Everybody has a general idea of what a dystopian future would look like should machines run amok—something like Ex Machina meets Maximum Overdrive meets The Terminator. But what about all the events leading up to a hypothetical robot tyranny? How exactly does that play out over a span of decades or centuries? Someone might point to last year, when Facebook shut down an AI experiment in which two bots started talking to each other, as a potential starting point. But even that episode was overblown.

 

Make no mistake—the conversation around AI governance and evolution is essential. But we must avoid hysteria and instead articulate exactly what is problematic about AI as it exists now in terms of security, scalability, and efficacy, so that we can build ethical frameworks for the future. And that begins by understanding how quickly AI’s inner workings can become obscure even to its creators, producing a “black box” effect.

 

What is an AI black box?

 

Deep learning is a type of machine learning loosely modeled on the neural networks of the human brain: layered artificial networks that effectively train themselves by parsing huge amounts of data.
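
For readers who want to see that idea in its smallest possible form, here is a hedged sketch in Python (the dataset, library, and layer sizes are illustrative choices, not anything referenced in this post): a tiny layered network learns to recognize handwritten digits simply by being fed examples.

```python
# A minimal sketch of deep learning's core idea: a layered network adjusts
# its internal weights by repeatedly parsing example data.
# Illustrative only -- the dataset and layer sizes are arbitrary choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of artificial "neurons" learn their own internal features.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("accuracy on unseen images:", net.score(X_test, y_test))
```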

 

For example, back in 2012, Google introduced a neural network that was able to identify, with roughly 75% accuracy, pictures of cats among 10 million randomly selected images. All of that was accomplished without a single image ever being labeled before it was fed into the network.
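
The striking part of that experiment was that the network found structure in images nobody had labeled. A much smaller, purely illustrative analogue of learning without labels (this is not Google's system, just a sketch of the principle) is unsupervised clustering:

```python
# A toy analogue of learning from unlabeled data: group images into clusters
# without ever telling the algorithm what any of them depict.
# (Google's 2012 system was a far larger deep network; this only illustrates
# the principle of finding structure without labels.)
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)  # labels are loaded but deliberately ignored

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
print("images grouped into cluster 0:", (clusters == 0).sum())
```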

 

That was six years ago.

 

Flash-forward to the onset of self-driving cars and you can see how quickly and powerfully deep learning can evolve. Nvidia, an AI computing company that has already demonstrated a self-driving car never explicitly trained to detect road markings, just released Xavier, a system-on-a-chip for autonomous vehicles capable of performing 30 trillion operations per second.

 

So what’s all the “black box” talk about?

 

Deep learning neural networks are not only getting more powerful; the reasoning behind their decisions is also becoming less clear. In a self-driving car, that opacity could contribute to a fatal accident. At a macro scale, an AI-driven power grid gone haywire could potentially shut down an entire national economy.
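
To make the “black box” point concrete, here is a hedged sketch: even in a toy network anyone can train, the learned parameters are just matrices of numbers, and nothing in them reads like a human-intelligible rule. (The dataset and network size are arbitrary illustrations.)

```python
# Train a tiny network, then look inside it. The weights driving its decisions
# are just arrays of floating-point numbers -- there is no legible "because".
# Illustrative sketch only; real black-box systems are vastly larger.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=1000,
                    random_state=0).fit(X, y)

print("prediction for the first record:", net.predict(X[:1])[0])
print("first-layer weight matrix shape:", net.coefs_[0].shape)
print("a few of those weights:", net.coefs_[0][0][:5])  # numbers, not reasons
```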

 

The fear is that, if we allow AI to become deeply embedded into our social, political, and economic fabric without fully understanding it, we’re creating one hell of a mess to clean up in the future. If that mess doesn’t clean us up first.

 

AI Is Already Everywhere

 

Unless you decide to run to a log cabin and cordon yourself off from the world, you spend more of your day interacting with AI than not. From our cell phone and computer habits to our streaming television history to our credit card purchases to our GPS devices and so on, we’re constantly feeding proprietary algorithms more information, which strengthens their prediction capabilities. But in strengthening those capabilities, we’re compromising not only our privacy but also our access to the information and cutting-edge services for which we’re supposedly trading our personal information.

 

 

To Regulate or Not To Regulate?

 

In her article for PBS’ online magazine Nova Next, Bianca Datta explores the fine line between common-sense regulation and stifling AI innovation. Datta quotes former White House Deputy CTO Ed Felten:

 

“One way to do it is to create policies that are designed by looking at the big picture rather than being very closely tailored to the current state of technology . . . rather than dictating which technologies should be used in certain settings, it makes more sense to have a performance standard.”

 

Datta likens this performance standard to car safety regulations that require bumpers to withstand crashes up to a certain speed, rather than mandating how the bumpers should be constructed.
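
In code terms, a performance standard looks less like prescribing an architecture and more like a test any system must pass, whatever is inside it. Here is a hedged sketch; the threshold, function names, and the idea of expressing the standard this way are illustrative assumptions, not drawn from any actual regulation.

```python
# A "performance standard" expressed as a test: we don't dictate how the model
# is built, only that it clears a measurable bar before it is deployed.
# All names and thresholds here are hypothetical, for illustration only.

REQUIRED_ACCURACY = 0.95  # hypothetical regulatory bar


def meets_performance_standard(model, X_test, y_test) -> bool:
    """Return True if the model clears the required accuracy, regardless of
    whether it is a neural network, a decision tree, or anything else."""
    predictions = model.predict(X_test)
    accuracy = sum(p == t for p, t in zip(predictions, y_test)) / len(y_test)
    return accuracy >= REQUIRED_ACCURACY


# Usage sketch: any model object exposing a .predict() method can be checked.
# if not meets_performance_standard(candidate_model, X_holdout, y_holdout):
#     halt_deployment()  # hypothetical action
```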

 

In a TechCrunch article, Kriti Sharma lays out some guidelines for avoiding AI black boxes within and across industries without resorting to heavy-handed regulation:

 

  • Keep AI customer support transparent: customers should know when they’re chatting with bots and align their expectations accordingly, while businesses should be clear about how the data surrounding those bot interactions will be used, with chat records always accessible to customers to help support their claims
  • Eliminate machine bias: creators must use bias detection testing that conforms to universal standards and testing protocols, which requires rigorously simulating how AI interacts with any and all customers (see the sketch after this list)
  • Ensure human safety: product engineers must test AI for usability, security, scalability, and safety in all facets of human interaction, mental and physical
  • Share best practices within and outside the organization: promote AI data that’s inclusive of all the people who will use it, while committing to production stoppages if AI doesn’t meet universal standards and testing protocols
  • Self-govern through ethics frameworks: use public-private partnerships to share transparency- and security-related solutions
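
As a rough illustration of what “bias detection testing” can mean in practice, here is a hedged sketch of one common check, comparing a model’s positive-outcome rate across demographic groups. The data, group labels, and the 0.8 threshold (a frequently cited four-fifths rule of thumb) are illustrative assumptions, and real audits involve far more than a single metric.

```python
# A minimal bias check: compare a model's positive-outcome rate across
# demographic groups. The data, group labels, and 0.8 threshold are
# illustrative assumptions, not a standard endorsed in this post.
from collections import defaultdict


def approval_rates(decisions, groups):
    """decisions: list of 0/1 model outcomes; groups: matching group labels."""
    totals, approved = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approved[group] += decision
    return {group: approved[group] / totals[group] for group in totals}


def passes_parity_check(decisions, groups, min_ratio=0.8):
    """Flag a disparity when the lowest group rate falls too far below the highest."""
    rates = approval_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()) >= min_ratio


# Usage sketch with made-up data:
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(approval_rates(decisions, groups))       # {'A': 0.75, 'B': 0.25}
print(passes_parity_check(decisions, groups))  # False -- flags a disparity
```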

 

 

The Future Exists in Collaboration

 

At tekMountain, one of the nation’s emerging innovation and entrepreneurial centers, we’ve continually leveraged public-private collaboration to help build a tech ecosystem from southeastern NC on out into the world. We know the difficult avenues that must be navigated in order to ensure local, state, and national industries are permitted to thrive while still remaining accountable.

 

As AI tech continues to make headway in the healthcare, human resources, and educational spheres, tekMountain looks to provide access to the mentorship and investment networks that will bring about the game-changing innovation our world thrives on.

 

Contact tekMountain today to learn more about how AI and deep learning can revolutionize your business.

 

This blog was produced by the tekMountain Team of Sean Ahlum, Amanda Sipes, Kelly Brown and Bill DiNome with lead writer Zach Cioffi.
