andrewducker:
The thing I hate about ontologies is the same thing I hate about all ideologies, the fact that people take them too damn seriously.

An ontology, if I might briefly explain before I lose 95% of my audience, is just a fancier way of saying "a description of a bunch of things and the relationships between them". At least that's how it's used in computing - the term comes from philosophy, where ontology is the study of the nature of reality - which is, after all, the largest set of things that exists (the imagination is possibly larger, but then you're studying the largest set of things that _don't_ exist, a quite different problem).

Anyway - the problem with ontologies is that they're made up of definitions - definitions of the things and definitions of the relationships between the things. And, obviously, a description is never the same as the thing itself. Descriptions are, by their very nature, generalisations, metaphors and simplifications of the thing. They say "X is like Y, except for the bits that are like Z, and the other bits that are more like A, B and C." When you get right down to it, descriptions are rooted in experience - they are attempts to relate one thing we haven't experienced to other things we have. Experience itself isn't transmissible from person to person, so we're forced to find points in common (or rather, points we believe are in common) and use those to draw a mental picture, find something that sounds right, and get the feeling across to the other person.

Anyway - all of this means that descriptions don't match reality - they match bits of it, from certain angles, for periods of time. Which means that there's no such thing as a perfect ontology - only ones that are useful for certain purposes, which map onto the things we're interested in, in ways that produce useful results.

All of which makes me sound as if I think that ontologies are 99% useless, which I don't - I find them incredibly useful, because I work with them all of the time. I work for a large financial company, programming computers to pass data around and making sure that when people buy financial products they get what they paid for, and that the right bits of data end up in the right places. In order to do so, we take certain chunks of the customer experience (names, addresses, amounts of money, investment funds), create descriptions of them, and use those descriptions to pass the data around, store it, and perform operations on it. If two different parts of the system have different ideas of what an investment fund looks like, then they can't talk to one another without an extra bit of code being written which translates between them.
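To make that concrete - a toy sketch in Python, with made-up names and fields rather than anything from a real system - this is roughly what that "extra bit of code which translates between them" looks like when two parts of a system have grown different descriptions of the same fund:

    from dataclasses import dataclass

    # System A's description of a fund: keyed by ISIN, price held in pence.
    @dataclass
    class FundRecordA:
        isin: str
        name: str
        unit_price_pence: int

    # System B grew up separately: its own internal fund code, price in pounds.
    @dataclass
    class FundRecordB:
        fund_code: str
        display_name: str
        unit_price_gbp: float

    # Neither side can read the other's data directly, so somebody has to
    # write (and forever maintain) the translation between the two ontologies.
    def a_to_b(record: FundRecordA, code_lookup: dict[str, str]) -> FundRecordB:
        return FundRecordB(
            fund_code=code_lookup[record.isin],            # map one identifier scheme onto the other
            display_name=record.name,
            unit_price_gbp=record.unit_price_pence / 100,  # and convert the units
        )

Nothing in there is clever; the work was in noticing that the two descriptions disagreed in the first place.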

All of which makes it extremely clear to me on a day to day basis that:
(a) most of my job is nothing to do with actual coding - it's practical ontology engineering. I take concepts like "payments" and "withdrawals" and "increments" and map them into computer code. When our understanding of the thing changes, the ontologies have to change with it. If we discover that people like to do something with their payments that our current model doesn't allow for, then we have to re-engineer the model to allow for it (there's a sketch of this just after the list). Actually writing the code has nothing on understanding the concepts in the first place.
(b) attempts to make everyone follow any ontology that's both large enough to be useful and strict enough to be precise aren't going to work for many people. In computing, ontologies have turned out to be most useful either for sending simple bits of information (RSS feeds, which only contain a few different types of data) or for dealing with a single company's data (web APIs, which largely tie you down to the website you're trying to talk to). Getting multiple groups of people to sign up to common descriptions for their data hasn't been successful, because they don't all think of the data in the same way and aren't likely to without a good reason to throw away all of their existing systems and start over.
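As an illustration of point (a) - again a toy sketch with invented names, not a real product model - the "coding" part of a change like this is tiny; it's the re-engineered concept that ripples through everything downstream:

    from dataclasses import dataclass

    # Version 1 of the "payment" concept: the model assumes every payment
    # goes into exactly one fund.
    @dataclass
    class PaymentV1:
        customer_id: str
        fund_code: str
        amount_pence: int

    # Then we discover customers want to split one payment across several funds.
    # The new definition is easy to type; the hard part is that every system
    # which assumed the old shape now has to be re-thought.
    @dataclass
    class FundAllocation:
        fund_code: str
        amount_pence: int

    @dataclass
    class PaymentV2:
        customer_id: str
        allocations: list[FundAllocation]  # one payment, many destinations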

Incidentally, the failure of ontologies to map onto reality is also the reason why AI is never going to work in the way that people originally imagined it might (extremely logical systems solving all the problems of the world). You can either have the ultra-efficiency of high-speed logic, or you can have actual understanding. The miracle of the human mind is that it sits on the border between the two, where they can inform each other - we understand things better than computers do, and we can think logically better than any other animal. Finding the way to translate between experience and logic, between the inductive and deductive, is where any answer will come from. AI, when it does reach that SFnal stage, will be understanding things through experience, not definitions, and thus will suffer all the problems that people do. Just not this week.
