It is strange, at least to me, that computers are said to store things in "memory." No one
says they "remember" things, or "forget" anything. In fact, they don't. Every bit goes to an address,
an enumerated address; if all else fails, which it doesn't, the machine can simply be made to review every
address in sequence until it finds what you are looking for. It's there, all right - though
arguably a little the worse for wear.
Thus computers are not the obvious place to study forgetting, or remembering, or reminding. But I think
it is still possible. I suspect that the things living creatures remember are intensely analyzed and highly processed
before they end up as engrams (whatever those are, exactly). I also suspect that if a computer is programmed
to do similar analysis and processing before it files something away, then getting that thing back becomes
delightfully problematic. The machine forgets what it saw. Or it does remember, only not right now. Or it insists
it remembers right now - only it's quite wrong, or not fully right.
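What I mean can be sketched in a toy program. Everything here - the digest scheme, the names, the examples - is my own illustration, not a claim about how memory actually works: each perception is kept in full, but it is filed only under a lossy, heavily processed index, so retrieval through a similar cue may surface the right memory, the wrong one, or none at all.

```python
# Toy sketch: a lossy "filing" step before storage.
# digest() is a stand-in for whatever analysis a brain might do.

def digest(perception: str) -> frozenset:
    """Reduce a perception to a crude feature set: its distinct
    longer words. Much detail is discarded at filing time."""
    return frozenset(w.lower() for w in perception.split() if len(w) > 3)

store = {}  # filed memories, keyed only by their digests

def file_away(perception: str) -> None:
    store[digest(perception)] = perception  # the full perception IS kept...

def recall(cue: str):
    """...but it can only be reached through a digest
    that overlaps the cue's digest."""
    key = digest(cue)
    best, overlap = None, 0
    for k, memory in store.items():
        shared = len(key & k)
        if shared > overlap:
            best, overlap = memory, shared
    return best  # may be the wrong memory, or None: nothing was erased,
                 # yet the machine cannot always get back what it saw

file_away("my father walking the dog along the river in October")
file_away("a dog barking at the mailman one October morning")

print(recall("walking by the river"))  # the first memory
print(recall("the mailman"))           # the second
print(recall("my mother's kitchen"))   # None: filed, but unreachable this way
```

Nothing in the store is ever deleted; failure lives entirely in the filing and the cue.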
Herein, one model. I have designed it to account for:
- the present reasonably stimulating a reminiscence
- the present unreasonably stimulating some other reminiscence
- the present not stimulating any reminiscence, though it should have
My assumptions are:
- every perception is stored (but that doesn't mean you can retrieve it).
- the present, or rather your perception of it, is never edited or filtered (but how you catalogue it for future recovery is another matter entirely).
- nothing is ever erased (but things may be misplaced).
- your entire past is always reviewable (but whether you do that, always, is uncertain).
- "loops" that review the past, or the way you filed the past, really aren't loops in the computer-programming
sense: you don't start at the beginning and proceed forward, or from the end backward. You
somehow see it all at once. Or almost all at once: this review is not infinitely fast.
- such decay as there is in any of these processes is not random. It may in fact be random, but assuming so
seems, for any model of failure, the easy way out.
And my objectives are:
- make my assumptions clear, because the real purpose of any computer modeling is to make you confess what you don't know
- allow, in the model, for compensation in response to failure. Whether this really happens in dementia, I don't know, but my father improved markedly for a few months on certain medications; and while this is not a biochemical model, any model must allow for short-term recoveries of lost ground.
- come up with some testable hypotheses - that is, speculate usefully on what might make dementia better, or worse, or just something other than relentless.
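The assumptions and objectives above might be exercised together in a second toy sketch. Again, every name and mechanism here is my own invention, offered only as one way the pieces could fit: perceptions are never erased, decay deterministically erodes only the filing index, and a "compensation" step re-files what it can, producing exactly the short-term recovery of lost ground just described.

```python
# Toy model: non-random decay of the filing index, with compensation.
# The perceptions themselves are never erased; only their filing degrades.

perceptions = [
    "father at the lake house, summer 1975",
    "the smell of coffee in the hallway",
    "a blue bicycle left out in the rain",
]

def digest(p: str) -> frozenset:
    return frozenset(w for w in p.split() if len(w) > 3)

index = {digest(p): p for p in perceptions}

def decay(index):
    """Deterministic, not random: each digest loses its alphabetically
    last feature. The perception is untouched; only the filing erodes."""
    return {frozenset(sorted(k)[:-1]): v for k, v in index.items()}

def recall(index, cue: str):
    """Reminiscence only when exactly one filed digest overlaps the cue."""
    key = digest(cue)
    matches = [v for k, v in index.items() if key & k]
    return matches[0] if len(matches) == 1 else None

def compensate(index, perceptions):
    """Re-file everything from the intact store of perceptions:
    a short-term recovery of lost ground, not a cure."""
    return {digest(p): p for p in perceptions}

d1 = decay(index)
d2 = decay(d1)

print(recall(index, "summer at the lake"))  # found
print(recall(d2, "summer at the lake"))     # None: misfiled, not erased
print(recall(compensate(d2, perceptions), "summer at the lake"))  # back
```

Because decay is deterministic, the same filing always fails the same way, which is what makes hypotheses about it testable: one could predict which cues stop working, and when re-filing would restore them.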