Strong AI is a common goal of many computer scientists. So far, machine-learning techniques have produced impressive results in narrow domains but nothing we could all call "intelligent."
Today's wave of AI technology is still driven by the artificial neural network (ANN) neuron model pioneered decades ago. Thanks to recent advances in neuroscience, we now know far more about how neurons work together than we did when ANNs were created, and systems built on a more realistic neuronal model are more likely to produce strong AI.
Hierarchical temporal memory (HTM) is a realistic, biologically constrained model of the pyramidal neuron that reflects today's most recent neocortical research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout, which suggests that a common algorithm processes all your sensory input, no matter which sense is being used. Matthew Taylor offers an overview of core HTM concepts, including sparse distributed representations, spatial pooling, and temporal memory.
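To make the first of those concepts concrete, here is a minimal, hypothetical sketch (not NuPIC's actual API) of a sparse distributed representation: a mostly-zero binary vector whose few active bits carry meaning, where similar inputs share active bits. The function names and vector sizes below are illustrative assumptions.

```python
# Illustrative sketch of SDR properties; names and sizes are assumptions,
# not part of NuPIC or the HTM specification.

def sparsity(sdr):
    """Fraction of bits that are active (HTM typically uses ~2%)."""
    return sum(sdr) / len(sdr)

def overlap(a, b):
    """Count of shared active bits; semantically similar inputs
    are encoded so that they overlap heavily."""
    return sum(x & y for x, y in zip(a, b))

n = 20  # toy size; real SDRs are usually 1,000+ bits
a = [1 if i in {2, 5, 11, 17} else 0 for i in range(n)]
b = [1 if i in {2, 5, 11, 19} else 0 for i in range(n)]

print(sparsity(a))    # 0.2
print(overlap(a, b))  # 3 shared active bits
```

Because only a few bits are active, two SDRs sharing even a handful of bits is statistically unlikely by chance, which is what makes overlap a robust similarity measure.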
Matt Taylor is the open source community manager for the Numenta Platform for Intelligent Computing, where he spends most of his time managing, encouraging, and interacting with the NuPIC OS community. Matt has been working with and on open source projects for years. Originally from a farming community in rural Missouri, Matt now lives in California and increasingly finds it hard to leave.
©2017, O'Reilly Media, Inc.