Okay, well the idea is learning by example. I have foreknowledge that as I'm typing now, I'm entering text into a field of a thread, but how do I know that?
I think our minds and computers are structurally very similar, intrinsically, because we created them. The way a computer thinks mimics how we think, so why not make that easier to see? Presently, someone is taught the interface of a computer in order to understand it. Why not instead represent the same patterns of thought humans experience, so that the human-computer experience approaches a kind of singularity?
I want to enable computers to display a 2D abstract of the information that correlates to the information we are providing for their use. Whether linearly, vertically, as a workflow chart, etc., it's all a hierarchically structured pattern of processes. As I'm typing now, I have no direct effect on the computer, only an indirect influence on its performance.
So if the input = "Letter"
Level 1a - Keyboard / Hardware
1b - Keyboard input / "%d%d%d%d%d%d"
1c - Character type / "%c%c%c%c%c%c"
1d - ASCII / "0100 1100", "0110 0101", "0111 0100" "0111 0100" "0110 0101" "0111 0010"
2 - Characters / "L", "e", "t", "t", "e", "r"
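The character-to-ASCII layer above can be sketched in a few lines of Python (a minimal illustration of my own, not part of any existing system), mapping each character of the input "Letter" to its binary ASCII code:

```python
def to_ascii_bits(text):
    # format each character's ASCII code as two 4-bit groups, e.g. "0100 1100"
    return [f"{ord(ch):08b}"[:4] + " " + f"{ord(ch):08b}"[4:] for ch in text]

print(to_ascii_bits("Letter"))
# ['0100 1100', '0110 0101', '0111 0100', '0111 0100', '0110 0101', '0111 0010']
```

This reproduces the Level 1d / Level 2 correspondence: one binary code per character, in order.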
The output is (// = not-directly interacting):
// Level 1 - Computer / MacBook Pro
// 2 - Operating system / OS X
// 3 - Application / Safari
4 - Website / https://discussions.apple.com/etc.
5 - Thread / "Letter"
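The output hierarchy could be represented as a simple ordered structure (a sketch with my own naming, assuming a flag for the levels the user does not directly interact with):

```python
# each entry: (level, kind, value, directly_interacting)
output_levels = [
    (1, "Computer", "MacBook Pro", False),
    (2, "Operating system", "OS X", False),
    (3, "Application", "Safari", False),
    (4, "Website", "https://discussions.apple.com/etc.", True),
    (5, "Thread", "Letter", True),
]

for level, kind, value, direct in output_levels:
    marker = "" if direct else "// "  # "//" = not directly interacting
    print(f"{marker}Level {level} - {kind} / {value}")
```

A 2D display of the kind described above would just be a rendering of a structure like this, from hardware at the top down to the thread at the bottom.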
The end purpose is to design a framework for computers to express input and output in a hierarchical form that mirrors the way the mind works, so that you can learn by example. And, to allow everything controllable to be associable:
Everything I'm using:
Hardware
Software
API-Internet
And we can restructure the hardware, software, and API (not limited to the Internet) so that as much as you can imagine, you can make real by associating input to output.