Re: Standard way of graphics in Linux



On a sunny day (Thu, 21 Sep 2006 16:54:08 +0200) it happened Herbert Kleebauer
<klee@xxxxxxxxx> wrote in <4512A790.5165D39F@xxxxxxxxx>:

Bernhard Agthe wrote:


Start high-level and go
low-level later - so you first understand the mechanisms and interfaces
before you try to program them...

That's wrong, and that is the main reason for inefficient code. First
understand the low-level mechanism and then you will be able to
write efficient HL code. First learn assembly programming so you
know what is an easy and what is a hard job for a CPU. Then learn how
a compiler breaks down HL code to machine code, and then you are
able to write HL code in a way that can be executed efficiently on
a processor (and therefore you almost never have to write the assembler
code yourself anymore).
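
An illustration of the kind of asm-aware HL code meant here (a sketch,
not from the original post): on x86 a loop that counts down to zero
needs no separate compare instruction, because the decrement already
sets the zero flag that the branch tests.

void zero_up(int *p, unsigned n)
{
    for (unsigned i = 0; i < n; i++)   /* index plus compare each pass */
        p[i] = 0;
}

void zero_down(int *p, unsigned n)
{
    while (n--)                        /* maps onto dec/jnz directly */
        *p++ = 0;
}

/* zero_down corresponds naturally to:
 *   loop: mov dword [edi], 0
 *         add edi, 4
 *         dec ecx
 *         jnz loop
 * A modern compiler will usually normalize both forms to the same
 * machine code anyway, so the payoff is in the understanding. */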

I'm not interested in writing applications; I'm interested in
understanding how things work at the lowest (user-mode) level. Any code
which is linked (dynamically or statically) to your code and executed in
the context of the user program just hides things away. And because
it is executed in the normal program context, you can write the
very same CPU instructions yourself in your program. The only interesting
part is the interface which leaves the program context (the int or syscall
interface to the OS, or the socket interface to the X server), because
the things which are done behind this interface (accessing the
hardware, or managing system resources) can't be done in a normal
user program. That's the reason why I program in assembler (I don't
even use a linker but generate every byte in the executable, including
the ELF header, myself as part of the program source) and not in C,
and why I use the native int80 interface to the Linux kernel and not
the libc wrapper, and why I will try to use the socket connection to the
X server and not the Xlib wrapper (which, as far as I have read, is
a very simple wrapper).
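
To make "generate every byte, including the ELF header" concrete, here
is a minimal sketch in C (C rather than the assembler of the original,
for readability) that emits a complete i386 Linux executable by hand:
a 52-byte ELF header, one 32-byte program header, and nine bytes of
machine code that call exit(0) via int 0x80. The output name "tiny" and
the load address 0x08048000 are illustrative choices, not from the post.

#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* machine code: mov eax,1 ; xor ebx,ebx ; int 0x80  = exit(0) */
    unsigned char code[] = {0xb8,0x01,0x00,0x00,0x00, 0x31,0xdb, 0xcd,0x80};
    Elf32_Ehdr eh; Elf32_Phdr ph;
    Elf32_Addr base = 0x08048000;            /* classic i386 load address */
    memset(&eh, 0, sizeof eh);
    memset(&ph, 0, sizeof ph);

    memcpy(eh.e_ident, ELFMAG, SELFMAG);     /* "\177ELF" magic */
    eh.e_ident[EI_CLASS]   = ELFCLASS32;
    eh.e_ident[EI_DATA]    = ELFDATA2LSB;
    eh.e_ident[EI_VERSION] = EV_CURRENT;
    eh.e_type      = ET_EXEC;
    eh.e_machine   = EM_386;
    eh.e_version   = EV_CURRENT;
    eh.e_entry     = base + sizeof eh + sizeof ph;  /* code follows headers */
    eh.e_phoff     = sizeof eh;
    eh.e_ehsize    = sizeof eh;
    eh.e_phentsize = sizeof ph;
    eh.e_phnum     = 1;

    ph.p_type   = PT_LOAD;                   /* one segment: headers + code */
    ph.p_offset = 0;
    ph.p_vaddr  = ph.p_paddr = base;
    ph.p_filesz = ph.p_memsz = sizeof eh + sizeof ph + sizeof code;
    ph.p_flags  = PF_R | PF_X;
    ph.p_align  = 0x1000;

    FILE *f = fopen("tiny", "wb");
    if (!f) return 1;
    fwrite(&eh, sizeof eh, 1, f);
    fwrite(&ph, sizeof ph, 1, f);
    fwrite(code, sizeof code, 1, f);
    fclose(f);
    return 0;                                /* then: chmod +x tiny && ./tiny */
}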
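
The native int80 interface itself is equally small. A sketch, assuming
i386 Linux and GCC inline assembly (build with gcc -m32 -nostdlib
-static); the syscall numbers 4 (write) and 1 (exit) come from
asm/unistd_32.h:

/* eax = syscall number, ebx/ecx/edx = arguments, eax = return value */
static long sys_int80(long nr, long a, long b, long c)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(nr), "b"(a), "c"(b), "d"(c)
                      : "memory");
    return ret;
}

void _start(void)
{
    sys_int80(4, 1, (long)"hello, kernel\n", 14);  /* write(1, msg, 14) */
    sys_int80(1, 0, 0, 0);                         /* exit(0), never returns */
}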
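
And the X connection setup really is a thin protocol: connect to the
server's Unix socket and write a 12-byte setup request. A sketch,
assuming display :0 and a server that accepts unauthenticated local
clients; a real setup usually needs the MIT-MAGIC-COOKIE-1 from
~/.Xauthority in the two auth fields left zero here.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa;
    memset(&sa, 0, sizeof sa);
    sa.sun_family = AF_UNIX;
    strcpy(sa.sun_path, "/tmp/.X11-unix/X0");      /* display :0 */
    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
        perror("connect");
        return 1;
    }

    /* setup request: byte order 'l' (little endian), pad, protocol
       major 11, minor 0, auth-name length 0, auth-data length 0, pad */
    unsigned char setup[12] = {'l',0, 11,0, 0,0, 0,0, 0,0, 0,0};
    write(fd, setup, sizeof setup);

    unsigned char reply[8];
    read(fd, reply, sizeof reply);
    /* first byte of the reply: 0 = Failed, 1 = Success, 2 = Authenticate */
    printf("setup status: %d\n", reply[0]);
    close(fd);
    return 0;
}

Everything after that (CreateWindow, MapWindow, ...) is just more
fixed-format requests written to the same socket, which is all Xlib
does underneath.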

Oh, I see.
Maybe you want to go all the way, and design the hardware, and then write
the OS in asm too.
Nothing extraordinary, I did it years ago with CP/M: designed the Z80
system hardware, and wrote a CP/M clone.
But wait, that level is a bit high.
What are these chips? So design your own memory and processor.
After all, memory can be done statically with a flip-flop; we will use
transistors for that. Actually I did that too, with TTL: made my own RISC
processor.
But hey, that level is still too high?
OK, what is a transistor? Well, now it gets a bit tricky to make one yourself,
but hey, no problem, we will use relays.
So, if you are a man, use relays and hardwire the program.





