Download our Technical Paper
We professionals need answers, not huge lists of documents and emails to read. Unfortunately, we spend roughly one full day a week looking for or recreating information to do our jobs. Our time is too precious to spend on anything but creating value.
Why are we stuck in this situation? One of the unintended consequences of Digital Transformation is exponential growth in the volume of digital documents and emails we produce. Most organizations have millions of documents and emails. Enterprise Search lets you query and find a shorter list to inspect. That's akin to color-coding the straw so you only look at the blue straw, and not the red. But you still have to read those documents to get your answers.
What we need is a magnet that pulls out the needles so we can stop endlessly searching and reading. We need this urgently, because the cost of this problem is huge.
Working Council of CIOs, AIIM, the Ford Motor Company and Reuters have found that:
What does this mean? We are both overloaded with information, and yet chronically underinformed. That’s an odd paradox. Something is intrinsically broken.
Doing our work requires answering questions. The answers are there; they are simply trapped in documents. We can't find what we need to know. We should never have to recreate good work.
Our team at Cognizer invented a new form of AI, which we call Natural Language Intelligence (NLI), to fix this paradox. We built a software platform we call the Corporate Brain, with a powerful API that allows our clients to use NLI in their workstreams. Cognizer's NLI works like a magnet, pulling "needles" of insight from the growing "haystack" of documents.
The Brain answers free-form questions your teams need to excel at their work. It remembers everything it reads and hears for instant recall.
To explore how Cognizer can help your organization “Stop Searching, Start Knowing”, click here.
Your business knowledge is perhaps your most valuable corporate asset, so it makes sense to monetize it in every way possible. Putting your business knowledge to use in real time is strategically vital. That is hard to do when it’s trapped in documents.
Documents are not knowledge. Documents contain knowledge.
For most organizations, the vast majority of their business knowledge resides in documents: emails, proposals, presentations, contracts, manuals, etc. These documents are digitized and stored in sophisticated enterprise content management systems such as Box.com, FileNet, and OpenText, and in related systems such as Office 365, Google Drive, etc. These systems help users search for documents, using filters, tags, metadata and the like for assistance. "Search" is the operative term. Would you be surprised that up to 35% of your team's time is wasted searching for information? (Source: KM World)
Right now, the only way an executive or manager can extract knowledge from a document (e.g., to prepare for a meeting or perform a task) is manually.
We search, then find and read the relevant documents.
We determine which facts are more pertinent and relevant to our task at hand.
We apply that knowledge to the task and ideally remember it for the next time.
Simply put, knowledge is trapped in documents.
Experts cite the need for AI to capture, retain and quickly disseminate knowledge from documents to solve business problems and optimize business processes like these:
Customer Service and Support Operations
Governance, Compliance and Ethical Issues
Human Capital Management/Recruiting Operations
IT Service Management and Help Desk Operations
Sales and Lead Management Operations
Security, Risk and Fraud Operations
Sourcing, Procurement and Vendor Management
Supply Chain Operations
Systems like Box™ need to be augmented with additional capabilities to activate your business knowledge in real time. In fact, your entire enterprise needs to be augmented with AI.
Many experts point to AI to cure these gaps. Since there are many forms of AI, which form of AI works best to solve this challenge?
Cognizer, the Corporate Brain, answers this question. There are many types of AI. But to solve these types of problems, our team created a new type of AI: Natural Language Intelligence. With our proprietary Natural Language Intelligence, Cognizer learns, retains and proactively disseminates knowledge from documents and enterprise systems in real time. Building the Augmented Enterprise requires the Corporate Brain.
Searching for what you need to know to perform at your best is like having a second job. Who has the time? Stop searching. Start knowing.
Everyone knows we are inundated with documents, emails, and files. In most large organizations there are millions of them. In fact, 90% of what we know is locked in documents. When we need answers from CRM, ERP and HR systems, it's super easy. When we have questions whose answers are locked in documents, it's excruciating.
The problem is that Enterprise Search is the wrong tool for the job. That’s why at Cognizer we say “Stop Searching. Start Knowing!”
We are endlessly searching for "needles" of insight buried in an ever-growing "haystack" of documents. Enterprise Search doesn't deliver answers; it just delivers lists of documents. Visualize color-coding the straw in the haystack; that is what Enterprise Search is doing.
Today, we spend about one whole day every week searching and reading documents. But the volume of documents is doubling every few years. Something has to change. Our time is too precious to be spent this way.
Search is broken.
What’s the cost?
That’s a lot of wasted time, and a lot of missed opportunities.
Our team at Cognizer has fixed this problem. Our new category of AI called Natural Language Intelligence (NLI) works like a magnet: extracting answers without search.
To put NLI to work for your organization, we built a powerful software platform called the Corporate Brain. To learn how Cognizer's Corporate Brain can help your company, just click here. Stop Searching. Start Knowing!
Our work requires answering questions and developing ideas. We need insights to do both. The facts and intelligence we seek exist; they are part of the "collective intelligence" of our organization. The problem is that about 40% of the time we simply can't find what we need to perform our work.
When this happens, we have no choice but to recreate work already performed by our teammates. Something is broken.
Recreating work is a big problem, and it's very costly.
Studies show that 42% of tribal knowledge is unique and not shared amongst coworkers. When an employee leaves their job, we have to recreate all that work. We spend countless hours recreating work that's already been done. That's crazy!
Cognizer, the Corporate Brain, solves these problems.
Download our free white paper here to learn about how your company can leverage the Corporate Brain to solve your problems. “Stop Searching, Start Knowing!”
Where AI 1.0 relied on brittle engineered procedures, in AI 2.0, data scientists focused on advanced math. This started out with basic statistics and turned into hundreds of algorithms that could predict some form of trend or classification.
Used independently, these algorithms are rarely able to achieve accuracies above about 50%. This is primarily because most data science problems are non-linear; that is, the data space is not consistent. If the problem you are trying to solve is churn at a bank, some customers could be leaving because they were turned down for a car loan, while others leave because of banking fees. Still others could be leaving because they moved, divorced or even died. Each of these has its own pattern and therefore needs its own model for predictions.
Data scientists tried to solve this by creating "ensembles" of models that could each predict one form of behavior. This helped a lot, but it required them to understand the underlying parameters of each behavior pattern. Often these patterns were very sophisticated, involving hundreds or thousands of features over time.
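To make the ensemble idea concrete, here is a minimal sketch in Python using scikit-learn. The synthetic data, the choice of member models, and the soft-voting scheme are illustrative assumptions for this post, not a prescription for any particular churn problem.

```python
# A minimal sketch of the "ensemble of models" idea: several differently-biased
# learners are combined so that no single decision boundary has to explain
# every churn pattern at once. Data and model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a churn table: 20 numeric features, non-linear class boundary.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           class_sep=0.8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("linear", LogisticRegression(max_iter=1000)),   # broad linear trends
        ("tree", DecisionTreeClassifier(max_depth=6)),   # rule-like segments
        ("boost", GradientBoostingClassifier()),         # subtler interactions
    ],
    voting="soft",  # average predicted probabilities across the members
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```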
In 2006, Geoffrey Hinton and his collaborators published their seminal work on deep belief networks ("A Fast Learning Algorithm for Deep Belief Nets"), which became the inflection point for Deep Learning. This was not the first research in this area; in fact, Hinton had co-authored the landmark paper on backpropagation two decades earlier. But the 2006 work was well timed, and the concept of Deep Learning was born. Then, in 2012, Hinton, Ilya Sutskever and Alex Krizhevsky used Deep Learning in the acclaimed ImageNet competition and blew the record away. Their advanced Deep Learning model improved the error rate of image recognition by a whopping 10.8 percentage points, a 41% advantage over the next competitor. With that, Deep Learning was off to the races.
Since that time, Deep Learning has gone from Hinton's team's roughly 23% error rate to 29 out of 38 teams achieving less than a 5% error rate in the 2017 ImageNet competition. All of them used Deep Learning. In fact, by 2019, ImageNet researchers were consistently getting error rates below 2%.
But Deep Learning’s amazing performance is not restricted to image recognition. It is basically good at any type of classification problem where there is plenty of labeled data. This includes voice recognition, cancer screening, autonomous cars and robotics. It can be used against business data for customer engagement, fraud, anti-money laundering and retention. It is used in diverse industries such as banking, pharmaceuticals, chemical, oil and gas, and agriculture. In each of these situations, good data scientists with lots of data can get the classification to more than 90% accuracy. Again, this is a game changer.
The concept of Deep Learning is that the data scientist creates a stacked neural network and "feeds forward" labeled data. During classification, if the model's output does not match the label, the error is back-propagated down the network, adjusting the weights of each neuron as it goes. This process is iterated using gradient descent until the error collapses and the outcome converges.
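For readers who like to see the mechanics, here is a bare-bones sketch of that loop in Python with NumPy: feed labeled data forward, measure the error against the labels, back-propagate it, and nudge the weights by gradient descent. The toy two-layer network and the XOR data are illustrative assumptions only, not any particular production architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: XOR, a classic non-linearly-separable task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small "stacked" network: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Feed the labeled data forward through the stack.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error against the labels, back-propagated layer by layer.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically converges toward [0, 1, 1, 0]
```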
As Deep Learning took hold, it began to diversify into several architectures. Convolutional networks were used for spatial problems such as image recognition, Recurrent Neural Networks for longitudinal analysis and Self-Organizing Maps for dimensionality reduction. Today, there are more than 25 unique architectures and many more variations. Deep Learning’s advantage is that the “Feature Detection” is done automatically. Data scientists do not have to guess what is causing the predictive behavior; the network picks this up by itself as the weights of the neurons converge.
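As a small illustration of that automatic feature detection, the sketch below (assuming PyTorch, our choice for this example) defines a tiny convolutional stack. Nothing in it tells the network what an edge or a texture looks like; the convolution kernels are just weights that converge during training. The layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# A small convolutional stack for 28x28 grayscale images (sizes are arbitrary).
# No hand-engineered features: the kernels are learned as the weights converge.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),   # learned low-level detectors
    nn.MaxPool2d(2),                                          # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # learned higher-level detectors
    nn.MaxPool2d(2),                                          # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                                # 10-way classification head
)

x = torch.randn(8, 1, 28, 28)   # a batch of 8 fake images
print(model(x).shape)           # torch.Size([8, 10])
```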
The problem with Deep Learning is that it requires a lot of data, and the data must be labeled. There are many research projects trying to reduce this requirement, but it is still a big problem. In addition, Deep Learning is really only focused on classification, either spatial or temporal, but not at the same time. This means it is really great at classifying images, but not great at predicting sequences of data.
This is where our brain is formidable. Unlike AI based on engineered procedures or mathematically calculated classifications, our brain is a “Prediction Engine.” It is great at constructing a model of the world, and then predicting future outcomes and identifying anomalies based on those predictions. It can do this with very little data and uses transfer learning to establish similar behavior.
This is what scientists consider intelligence, and this will be the basis of the next generation of AI, which will work much more like our human brains.
In both AI 1.0 and AI 2.0, when creating an AI model, the intelligence came from the human data scientist doing the engineering or the math. That meant it not only required a very smart data scientist to create these sophisticated models, but the models were limited by the capabilities of that human. As data sets went from thousands of rows to trillions of rows, the human became the pinch point. Big Data and Deep Learning got us pretty close to "what should be" predictions, but they are barely scratching the surface of "what could be" predictions.
The human brain is amazing. With one hundred billion neurons and hundreds of trillions of synapses, our human brain can calculate 38 thousand trillion operations a second. Only our fastest supercomputers can come anywhere close to that. And where our supercomputers use 10–20 megawatts of electricity, our brain does this with about 20 watts of energy. But it is not just the calculations that are impressive in our brain. It is the creativity and innovation that is most fascinating. We are great at thinking outside of the box, applying learning from one area to another area and innovating remarkable new ideas.
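Taking those figures at face value, and borrowing the 200-quadrillion-operations-per-second supercomputer throughput quoted later in this post, a quick back-of-the-envelope calculation shows how lopsided the energy comparison is:

```python
# Back-of-the-envelope comparison using the rough figures quoted in this post.
brain_ops_per_sec = 38e15            # "38 thousand trillion operations a second"
brain_watts = 20                     # "about 20 watts of energy"

supercomputer_ops_per_sec = 200e15   # "more than 200 quadrillion operations per second"
supercomputer_watts = 15e6           # midpoint of "10-20 megawatts"

brain_efficiency = brain_ops_per_sec / brain_watts
machine_efficiency = supercomputer_ops_per_sec / supercomputer_watts

print(f"brain:         {brain_efficiency:.2e} ops/watt")    # ~1.9e+15
print(f"supercomputer: {machine_efficiency:.2e} ops/watt")  # ~1.3e+10
print(f"ratio:         {brain_efficiency / machine_efficiency:.0f}x")  # ~140,000x
```

Rough as those numbers are, they put the brain roughly five orders of magnitude ahead of our fastest machines in operations per watt.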
The way AI 1.0 and AI 2.0 work, they will never be able to do that. To take AI to the next level, we need a new way. And that new way for AI is going to look a lot like the way of our brains. Our new models need to be much more generalized, more about the patterns and less about the math.
Our brains do three things that our AIs are going to need to master. First, our brain builds a "model of the world"; it stores many patterns across our cerebral cortex. Second, when our brain perceives something new, it stores that too. It sees these patterns across time in what is called a temporal model. Time is critical to the brain, and all patterns are stored in this context. Finally, our brain is constantly making predictions of what is going to happen next. If the prediction is correct, the pattern is reinforced. If there is an anomaly, it captures this difference as a new pattern.
For our AIs to reach the next level, they are going to need to do this same thing. We are going to need to build sophisticated models of the world that rely on time, patterns and constant predictions. Like the brain, these models will probably be hierarchical. Instead of neurons being simply binary (on and off), they will project several states. They will also update with not only feed forward data paths, but also feedback and related neural context.
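To ground that description, here is a deliberately tiny sketch in Python of the loop we have in mind: remember patterns over time, predict what comes next, reinforce confirmed predictions, and capture surprises as new patterns. It is a frequency-count toy for illustration only, not Cognizer's NLI and not a model of the cortex.

```python
from collections import defaultdict

class TinyPredictor:
    """Toy 'prediction engine': learns first-order transitions over time,
    predicts the next symbol, reinforces hits, and records anomalies."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))  # prev -> next -> count
        self.prev = None
        self.anomalies = []

    def observe(self, symbol):
        if self.prev is not None:
            counts = self.transitions[self.prev]
            predicted = max(counts, key=counts.get) if counts else None
            if predicted == symbol:
                counts[symbol] += 2    # prediction confirmed: reinforce the pattern
            else:
                counts[symbol] += 1    # surprise: capture the difference as a new pattern
                if predicted is not None:
                    self.anomalies.append((self.prev, predicted, symbol))
        self.prev = symbol

brain = TinyPredictor()
for s in "ABCABCABCABXABC":            # a repeating pattern with one anomaly ("X")
    brain.observe(s)
print(brain.anomalies)                 # [('B', 'C', 'X')]: the one surprise in the stream
```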
Then, there is consciousness. For many years, most scientists believed that when our computers reached a certain number of operations per second, consciousness would simply emerge from calculation. However, our fastest supercomputers are now more than 200 quadrillion operations per second. Not only has this prediction not happened yet, but there is no sign that this is a cogent assumption. New discoveries have suggested that consciousness is a quantum effect. If that is the case, we probably have a long way to go before our computers are conscious.
In the next few blogs, we are going to provide our thinking about where AI is going, how it is going to get there and how long it is going to take. We are going to be quite transparent about what is going on in our research labs and what strategies we find are working and what are not working. We hope you will enjoy this journey as much as we do.