From an ontological perspective :-> And also from a technological one.
They wanted to build something that meant that you _couldn't_ copy data, you could only include it. Where I could quote you, quoting Bob, quoting Charlie, quoting the NY Times and the back end would put all those bits together, creating a chain all the way back - and enforce payment for all the bits that were under copyright.
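(To make that concrete, here's a toy sketch of transclusion-by-reference in Python. It's emphatically not Xanadu's actual data model - the `Literal`/`Span`/`resolve` names are invented for illustration - but it shows how 'quote by inclusion' makes the attribution chain, and therefore the payment tally, fall out of the act of rendering.)

```python
# Toy sketch of transclusion-by-reference (illustrative only, not Xanadu's model).
# A document is a list of pieces: either literal text the author owns, or a span
# pointing into another document. Rendering follows the chain back to the source
# and records which owner every character came from.
from dataclasses import dataclass

@dataclass
class Literal:
    owner: str
    text: str

@dataclass
class Span:              # "include `length` characters of `doc_id`, starting at `start`"
    doc_id: str
    start: int
    length: int

docs = {}                # doc_id -> list of Literal/Span pieces

def resolve(doc_id):
    """Return a list of (character, owner) pairs for the whole document."""
    out = []
    for piece in docs[doc_id]:
        if isinstance(piece, Literal):
            out.extend((ch, piece.owner) for ch in piece.text)
        else:
            out.extend(resolve(piece.doc_id)[piece.start:piece.start + piece.length])
    return out

# Charlie writes something; Bob quotes part of it; I quote Bob quoting Charlie.
docs["charlie"] = [Literal("charlie", "the original words")]
docs["bob"]     = [Literal("bob", "Charlie said: "), Span("charlie", 0, 12)]
docs["mine"]    = [Literal("me", "Bob wrote: "), Span("bob", 0, 26)]

rendered = resolve("mine")
print("".join(ch for ch, _ in rendered))        # "Bob wrote: Charlie said: the original"
owed = {}
for _, owner in rendered:
    owed[owner] = owed.get(owner, 0) + 1        # one 'unit' owed per transcluded character
print(owed)                                     # {'me': 11, 'bob': 14, 'charlie': 12}
```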
It's a massively locked down system - because they focussed on 'correctness' rather than 'utility'. So you couldn't do anything that would violate the integrity of the system. Whereas the systems that actually work best are ones where you've got the ability to put in a mish mash of stuff, tie it all together, and then caretake it as you go along. Which is what the WWW does.
So, for instance, in Xanadu, moving a page means updating all of the links that point to it. Designing _that_ into a system is insanely hard, because you have to know what points at your page. Which means a mechanism for telling every site you link to that you link to it, keeping track of that, updating whenever the list gets out of date, etc. Whereas with the WWW you just move the page - and if you feel like it you put in a redirection page saying "This page is now over here at XXX." - no design necessary, no tech necessary. It's not 'perfect' in that sometimes you'll lose pages, but it's a hell of a lot easier to manage in the first place, and doesn't lock you in to a particular way of working.
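(And the Web's 'moved page' story really is that small. A minimal sketch using Python's standard http.server - the paths, target URL and port are made up - that answers the old address with a permanent redirect and everything else with the famous 404:)

```python
# A stub server for a moved page: old URL -> 301 redirect, everything else -> 404.
# (Paths, target URL and port are placeholders.)
from http.server import BaseHTTPRequestHandler, HTTPServer

MOVED = {"/old-essay.html": "https://example.org/new-home/essay.html"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in MOVED:
            self.send_response(301)                      # Moved Permanently
            self.send_header("Location", MOVED[self.path])
        else:
            self.send_response(404)                      # Not Found
        self.end_headers()

HTTPServer(("", 8080), RedirectHandler).serve_forever()
```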
And solving the things that go along with the original 17 rules was the reason that Xanadu was _always_ 6 months away from working.
I hear that Google Wave is also having fun problems - because they, again, have lots of little problems they're trying to solve that each require heavyweight architectural solutions. They're not trying to be quite so complex and 'correct' in what they're doing, so I expect they'll eventually get there, but by the looks of things it's the kind of project that will be eternally owned by one company, because the interoperability is just too unwieldy to work well with others.
Oh, from an ontological point of view, it's daft. ;)
But then you'd have no problem with citations and attribution, fewer problems with copyright and plagiarism... from a strictly academic user's perspective, it'd be amazing.
But then it's almost the opposite of small pieces, loosely joined. And that I'd have to cite who said that and where and when before submitting this post would also reduce the poetry, somewhat.
A Xanadish approach would have been interesting during 1993 - 1996, for me, as pages got lost and were deleted a lot, and the spiders couldn't keep track. Doesn't happen so much any more.
It's a massively locked down system - because they focussed on 'correctness' rather than 'utility'.
This is the standard argument that the Web 'works' because of the 404 error. While 404 was something that distinguished the Web from contemporary open hypertext systems (namely Hyper-G and Microcosm) and Xanadu, the Web succeeded more because a) the protocol and data format definitions were freely available, and b) it was initially targeted at a user community rather than being a predominantly research system (as was the case with Hyper-G and Microcosm).
So, for instance, in Xanadu, moving a page means updating all of the links that point to it.
Moving a page? No such thing. Once a document is created with a given ID, it's there for perpetuity. If you want to refer to it by a different identifier, that's what transclusion is for.
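(In other words the address space is append-only. A purely illustrative sketch - the `put`/`get` names are invented, and this isn't the tumbler addressing scheme - of what 'no such thing as moving' looks like:)

```python
# Illustrative only: an append-only store where IDs are minted, never reassigned.
# There is no rename, move or overwrite, so any ID that ever resolved keeps
# resolving to exactly the same content forever.
import itertools

_counter = itertools.count(1)
_store = {}                                   # doc_id -> content, frozen once written

def put(content):
    doc_id = f"doc:{next(_counter)}"
    _store[doc_id] = content
    return doc_id

def get(doc_id):
    return _store[doc_id]

original = put("the text everyone links to")
# A 'new identifier' is just a new document that transcludes the old one:
alias = put({"transclude": original})
assert get(original) == "the text everyone links to"      # old references never break
assert get(get(alias)["transclude"]) == get(original)
```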
a) the protocol and data format definitions were freely available, and b) it was initially targeted at a user community
Also - it was (fairly) simple to write both a client and a server. Xanadu was, from my understanding, insanely complex, because of all the situations it had to handle.
Not clear. The Front End-Back End protocol (from Xanadu Green, the version described in Literary Machines) would have been pretty straightforward to implement on the client side, mainly because all of the heavy lifting was being done by the server. The Back End-Back End protocol (which was what I had to ask Ted about), which would have been key to the implementation of the servers, is a different matter.
The other key to the server side is the enfilade data structure. Ted didn't publish anything about enfilades until fairly recently (in the Udanax source release, as source code) because he believed them to be such a good idea that they were worth retaining as a trade secret. I haven't implemented enfilades, but from what I can tell, they should be no harder to implement than any other moderately complex hierarchical data structure.
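(To give a feel for 'moderately complex hierarchical data structure': below is a sketch of a width-keeping tree. To be clear, this is my guess at the general flavour, not the actual Udanax enfilade design - each node caches the total width of its subtree, so finding a character position is a walk down the tree rather than a scan, and an edit only disturbs the cached widths along one root-to-leaf path.)

```python
# Sketch of a width-keeping tree (rope-like) -- NOT the actual Udanax enfilade,
# just an assumption about the rough flavour of structure involved. Each internal
# node caches the total character width beneath it, so position lookups descend
# the tree instead of scanning the text.
class Leaf:
    def __init__(self, text):
        self.text = text
        self.width = len(text)

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.width = left.width + right.width      # cached subtree width

def char_at(node, pos):
    """Return the character at absolute position pos (0-based)."""
    while isinstance(node, Node):
        if pos < node.left.width:
            node = node.left
        else:
            pos -= node.left.width
            node = node.right
    return node.text[pos]

tree = Node(Node(Leaf("Xanadu "), Leaf("kept ")), Leaf("everything"))
assert tree.width == 22
assert char_at(tree, 7) == "k"       # position 7 falls in the "kept " leaf
```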
As an aside, I'm increasingly struck that in the Web of the early 1990s (pre-Netscape) it was much easier to write browsers and servers than it is now.
HTTP/1.0? Doddle. HTML 2? Still pretty easy (and easier yet if you took the lazy route and didn't try to parse it as SGML first). No CSS, SVG, Javascript/ECMAscript, Flash. By 1995, I'd written special-purpose standalone servers and simple clients. I wouldn't want to try that now.
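(For a sense of scale: a complete HTTP/1.0 GET over a raw socket is only a few lines of Python. The host below is a placeholder - point it at whatever you like.)

```python
# A whole HTTP/1.0 client exchange, by hand. Host is a placeholder.
import socket

host = "example.com"
with socket.create_connection((host, 80)) as s:
    s.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    response = b""
    while chunk := s.recv(4096):     # in HTTP/1.0 the server just closes when done
        response += chunk
print(response.decode(errors="replace").splitlines()[0])   # e.g. "HTTP/1.0 200 OK"
```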
Which reminds me of another quote - that every successful complex system started off as a successful simple system.
If the original spec had been HTML5+CSS+Javascript then I doubt it would have got far; the initial barriers would have been too high.