Re: The Return of Cards?

From: Alexander Johannesen <alexander.johannesen_at_nyob>
Date: Sat, 5 Oct 2013 22:25:53 +1000
To: NGC4LIB_at_LISTSERV.ND.EDU
Hiya,

James Weinheimer <weinheimer.jim.l_at_gmail.com> wrote:
> But there is no stopping the trajectory.

Oh, I can think of a few, but we don't have to go much further than brain
complexity and markup for a real show-stopper. All experiments so far in
controlling cognition and synapse patterns are so trivial that we shouldn't
read too much into them. The strong AI and singularity folks are a little
too eager to project possibilities as real trajectories, and they forget
just how complex the problem really is. Now, I've worked in AI trying to
make strong AI, so I'm not talking out of my arse here; passing a Turing
test is easy, because it can be faked thanks to the very thing that makes
strong AI so difficult: the brain is complex and faulty and fuzzy, prone to
lapses in chemistry (internal as well as external), and affected in big
ways by things we think insignificant. The problem is *so* much more
complex than we first thought / hoped. Remember back when we discovered
genes and thought that the right set of genes made a given trait? Well, it
wasn't that simple. In fact, it was terribly more complex and ... fuzzy:
RNA and epigenetics and how alleles get expressed as proteins, and whatnot.

Short version: it won't happen as soon as people think. I _used_ to think
it would happen sooner, that we somehow could replicate brain behaviour
with software. Twenty years of trying very hard and studying tons of
biology have convinced at least me that most of what people think is
possible is, well, SciFi. We will be able to do some things, especially
playing with cognition, but beyond that we're still very much in the dark.

> So, concerning Alex's idea of a "human model of understanding" and
> Karen's comment that there is no single model

Just to clarify: I was pointing out that in the past, when we made software
systems, little to no understanding of human models of understanding went
into their designs, and that in the last few years this has slowly started
going the other way. UX and information design and data models are slowly
getting a human cognitive makeover, and I'm mostly referring to simple
things such as upper ontologies becoming more common, and ontologies in
general becoming a middle layer of information management we can actually
start using. Most if not all knowledge management has happened through the
possibilities embedded in data models (which percolate their semantics up
into the various interaction areas of the system), but now the data models
are becoming more ... flexible. Not perfect, not right, not necessarily
even good enough, but there's a trend we can see, a sign of things to come
now that we aren't so sure how human knowledge is to be represented in our
systems. And that's a good thing; it's a revolution.
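
To make that "middle layer" idea a bit more concrete, here's a rough sketch
in Python (every name, concept and record in it is invented for
illustration, not taken from any particular system): a tiny upper ontology
sits between the raw records and the interface, so the interface can ask
conceptual questions instead of database ones.

    # Hypothetical upper ontology: concept -> broader concept.
    UPPER_ONTOLOGY = {
        "Person":       "Agent",
        "Organisation": "Agent",
        "Book":         "CreativeWork",
        "Article":      "CreativeWork",
    }

    # Raw records, as a data model might store them (invented examples).
    records = [
        {"id": 1, "type": "Book",    "title": "Example Book"},
        {"id": 2, "type": "Article", "title": "Example Article"},
        {"id": 3, "type": "Person",  "name":  "Example Person"},
    ]

    def is_a(record_type, concept):
        """Walk up the ontology: does record_type fall under concept?"""
        while record_type is not None:
            if record_type == concept:
                return True
            record_type = UPPER_ONTOLOGY.get(record_type)
        return False

    # The interface can now ask at the conceptual level ...
    works = [r for r in records if is_a(r["type"], "CreativeWork")]
    # ... and the data model's semantics percolate up without the interface
    # knowing anything about the underlying schema.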

> Google results are quite different if you are in the US or Argentina or
> Norway or Italy but this is mainly with languages and advertising.

Had a really good one this morning. I quickly looked up "pannekaker"
(Norwegian for 'pancakes') on google.com.au (Australia), and was given
nutritional information in English as well as links to recipes in
Norwegian. Quite clever, even if somewhat trivial. I think the AI they've
got for parsing queries is getting better all the time, and along *that*
trajectory I see many cool things happening.

But then, just as Siri is impressive for the first week or two, you soon
find yourself getting annoyed at all the things she *should* know but
doesn't.
We humans are selfish creatures; if we can have some but not all, we get
cranky very fast. :)

> But returning to cards, there does seem to be a fundamental paradox: how
> to place more and more information on smaller and smaller screens?

Not sure that is true? Are screens actually getting smaller? I see a
variety of resolutions and sizes, but they seem to operate within a
somewhat human scale of "right size".

> I know that the military has done a lot of research into the "heads-up"
> displays of fighter pilots: how much information they can process and
> how quickly they can do it; what information is vital and what can safely
> be ignored, and so on.

Usability to the rescue. Or psychology. And cognitive science. And they
will tell you the answer: 7 +/- 2 (i.e. 5 to 9) conceptual things at a
time. If you've got more data, conceptualize it to bring the numbers down;
that seems to fit the way the brain deals with complexity (and is one of
those fandangled "human models of understanding" things most researchers
would agree with, even if the model obviously isn't literally true, just a
guideline; disclaimer, disclaimer).
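
As a rough Python sketch of that "conceptualize it to bring the numbers
down" idea (the facet names and groups below are invented for illustration,
not anyone's actual data model):

    # Fold a long flat list of facets into a handful of conceptual groups,
    # so the display stays within roughly 7 +/- 2 things at a time.
    GROUPS = {
        "When":  ["year", "decade", "century", "era"],
        "Who":   ["author", "editor", "translator", "publisher"],
        "What":  ["subject", "genre", "keyword", "classification"],
        "Where": ["country", "language", "place_of_publication"],
    }

    def chunk_facets(raw_facets, max_groups=7):
        """Group raw facets into at most ~7 conceptual buckets."""
        buckets = {}
        for facet in raw_facets:
            for group, members in GROUPS.items():
                if facet in members:
                    buckets.setdefault(group, []).append(facet)
                    break
            else:
                buckets.setdefault("Other", []).append(facet)
        # If there are still too many buckets, keep only the biggest ones.
        top = sorted(buckets.items(), key=lambda kv: len(kv[1]),
                     reverse=True)
        return dict(top[:max_groups])

    print(chunk_facets(["year", "author", "genre", "language", "isbn"]))
    # -> {'When': ['year'], 'Who': ['author'], 'What': ['genre'],
    #     'Where': ['language'], 'Other': ['isbn']}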

> Much of what I have seen with APIs and linked data has reminded me
> of adding more and more "stuff" on top of a hamburger (pickles and
> onions and mustard and ketchup ...). At some point, it just gets too much

Information overload - or, as Steve Pepper calls it, infoglut - is becoming
more and more important to how we interact with information. It's a
delicate balance between too much and too little. So what's the secret of
Twitter's success? They will tell you: 140 characters sits just right
between too much and too little (by no scientific measure), and they went
with it.

So what are libraries doing with the information design of their data? Do
they care?

Regards,

Alex
Received on Sat Oct 05 2013 - 08:26:31 EDT