In the mid-1990s my college roommate could not stop talking about a series of courses he was enrolled in within the biology department, and about how these biology professors were describing future technologies that would allow cars to drive themselves, robots to speak like humans, and computers to write their own software. It has never been clear how my roommate even enrolled in these courses in the first place, as he was the only person I knew at the entire university who had heard of them. I was busy proving Nyquist’s theorem each week, convinced that this was not going to lead to self-driving cars. His enthusiasm for these courses was completely over-the-top, and his passion for the subject matter was contagious. He kept telling anyone who would listen that this new technology he was learning about was going to lead to a whole host of seemingly unbelievable societal-level changes that sounded more like science fiction than reality. I was skeptical.
I had taken the required biology courses and had a hard time imagining how the Golgi apparatus and photosynthesis would lead to cars driving themselves.
My roommate at the time was talking about neural networks. I figured that after college I would never hear about his neural networks. I was wrong.
It was almost 20 years before I heard the term neural networks again. The next time, I was sitting in a boardroom in downtown San Francisco listening to the CEO of a unicorn technology company, who would be my customer for several years, explain how deep learning and neural networks were going to revolutionize his company. Enthralled with his descriptions, for a brief moment I wondered whether the CEO was also a biology major.
For several years I commuted to San Francisco on a regular basis while assisting in the migration of that customer’s multi-petabyte data and analytics infrastructure to the cloud (the project ended up becoming one of the largest AWS migrations of that era). It became a running joke among the team that with each subsequent trip, more billboards along the drive between SFO and downtown San Francisco appeared bearing the two letters “AI.”
The letters “A-I” were multiplying on every trip, just like cell division for highway signage.
I don’t recall my college roommate ever mentioning the term “AI” or “artificial intelligence.” For the better part of two years of college, though, he talked about neural networks non-stop.
For the past several years, almost every new or re-invented company we read about is an AI company. I’m not sure how that’s even possible, yet here we are.
There is an emerging discussion about what constitutes good and bad AI — not just from an ethical perspective but also from a functional perspective.
Unfortunately, there are a lot of AI-branded capabilities that simply do not work.
Poorly performing AI cannot predict correctly, cannot classify correctly, is vulnerable from a cybersecurity perspective, is not durable or reliable, and its outcomes lead to bad decision-making. What is even more concerning is that most organizations find it difficult to diagnose or quantify AI performance. The issues that degrade machine learning models over time are more complex than properly tagged training data or model drift alone. Robust situational awareness is required.
How reliable are the neural networks classifying and predicting outcomes? Most organizations aren’t sure, but are just glad they have AI.
Zectonal is focused on helping customers achieve a fundamental level of situational awareness: a complete understanding of the health and vulnerabilities of their entire AI ecosystem. Non-functional AI should have no place in this world.
Are you tired of hearing about over-hyped AI capabilities that perform poorly? Are you ready to make an impact on the industry? Learn more about what situational awareness really means to us. Reach out to us at email@example.com