So, I’m in the shower yesterday morning and I start thinking about that “Dragon NaturallySpeaking” commercial I’ve been seeing all over the place on TV. “How does it know to bold the previous word when the guy says ‘bold that’?” I think to myself. “It has to know the context…. Well, it must know a couple of dozen commands. It sensed a pause and dropped into command-sensing mode. It took the next words it heard and checked them against its list of known commands. One of them was a match. Hmm, would the human brain do it that way?”
That’s when I envisioned this whole array of agents riding above the part of our brain that turns sounds into word symbols. Each agent listens to the symbols. There must be thousands of them. Each agent has an output that signals how close it considers the symbol stream to be to its command. There’s another layer above that listens to all the outputs. When the monitoring layer receives a strong signal from one of its agents, it knows what command has been given.
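That picture is concrete enough to sketch in code. Here’s a toy version of the idea, where every name (`CommandAgent`, `Monitor`, the sample commands) is my own invention, not anything Dragon actually does:

```python
# Toy sketch of the agent/monitor layers: each agent scores the word-symbol
# stream against its one command; the monitor reacts to the strongest signal.

class CommandAgent:
    """Rides above the symbol stream and scores how close it is to one command."""
    def __init__(self, command):
        self.command = command
        self.words = command.split()

    def score(self, symbols):
        # Fraction of this agent's command words found, in order, in the stream.
        stream = iter(symbols)
        matched = sum(1 for word in self.words if word in stream)
        return matched / len(self.words)

class Monitor:
    """The layer above: listens to every agent's output at once."""
    def __init__(self, agents, threshold=1.0):
        self.agents = agents
        self.threshold = threshold

    def hear(self, symbols):
        best = max(self.agents, key=lambda agent: agent.score(symbols))
        if best.score(symbols) >= self.threshold:
            return best.command
        return None  # no agent fired strongly enough

agents = [CommandAgent("bold that"), CommandAgent("italicize that"),
          CommandAgent("delete word")]
monitor = Monitor(agents)
print(monitor.hear(["please", "bold", "that"]))  # "bold that"
print(monitor.hear(["hello", "world"]))          # None
```

With a couple dozen agents this is just a lookup; the interesting question is what happens when there are thousands and their signals overlap.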
It probably doesn’t work that way in the human brain. That’s more like how I’d program it on my computer, but it gets me thinking more about context.
“How does the brain know what the context of the command is? What’s involved in defining a context? How can I write software that could build contexts through training from an external environment? What structures do I need to have in place to allow contexts to be built? Hey, what does a plain and simple context look like? How does the context shift as the situation or environment changes?”
Context: a series of conditions that define a known state? Is that too simple?
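Simple or not, that definition is easy to write down. A hypothetical sketch, assuming a “state” is just a dictionary of observations (the names `Context` and `matches` are invented for illustration):

```python
# A context as a set of conditions that must all hold in the current world state.

class Context:
    def __init__(self, name, **conditions):
        self.name = name
        self.conditions = conditions

    def matches(self, state):
        # The context is "active" when every one of its conditions holds.
        return all(state.get(key) == value
                   for key, value in self.conditions.items())

cooking = Context("cooking", location="kitchen", stove="on")
print(cooking.matches({"location": "kitchen", "stove": "on", "radio": "off"}))  # True
print(cooking.matches({"location": "garage", "stove": "off"}))                  # False
```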
If the human mind has a symbolic model of the world within it, and our senses attempt to keep that model in sync with the external environment, then context would be our current belief about the state of our world. I imagine that there is also a model of ourselves inside our mind. Maybe that’s how we hear ourselves think. Anyway, as we grow and learn, are we constantly monitoring the inputs from the environment (or our own mind) and building new contexts? There must be a mechanism within our brain that decides when the conditions are right to create a new context. I don’t care if this is actually how the human brain works; I’m just interested in whether it is practical for building a machine that can learn.
So, what if there are lower, basic, contexts whose outputs are sent to another layer of higher contexts, and so on…. Can you build a complex understanding with that? What do you do with a context, anyway? If it is a set of conditions that define a state, then those conditions must be important information. Maybe a context is also like a gateway. When the inputs meet certain conditions, the inputs are amplified and sent on to another layer, or even other sections, for more processing?
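The gateway idea can be sketched too. In this toy version (every name and number here is an assumption of mine, not a claim about the brain), a gate checks its conditions against the inputs and, when they all hold, amplifies those inputs and passes them up to the next layer:

```python
# Sketch of a "gateway" context: inputs that satisfy all conditions are
# amplified and propagated upward; otherwise nothing passes through.

def gateway(conditions, inputs, gain=2.0):
    """Pass inputs through, amplified, only when every condition is satisfied."""
    if all(cond(inputs) for cond in conditions):
        return {key: value * gain for key, value in inputs.items()}
    return None  # gate stays closed; nothing reaches the next layer

# Layer 1: a "warmth" context fires when temperature and light are both high.
warm = [lambda s: s["temp"] > 0.5, lambda s: s["light"] > 0.5]

print(gateway(warm, {"temp": 0.8, "light": 0.7}))  # {'temp': 1.6, 'light': 1.4}
print(gateway(warm, {"temp": 0.2, "light": 0.9}))  # None
```

Stacking these, the amplified output of one layer becomes the input conditions of the next, which is roughly the lower-to-higher layering asked about above.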
There must be a lot of context conditions: millions, maybe billions. Hmm, I wonder if this is even the way the brain processes information? Is there any practical idea here?
I think I’ve lost my context.