Convenience with a Twist: How Artificial Intelligence Puts Your Data at Risk

Do you know how often you use artificial intelligence tools every day?

Many people associate artificial intelligence, or "AI," with robots and science fiction movies, but the reality is that if you use a cell phone, a digital assistant, or a computer, you are likely already interacting with AI technology.

The point of most AI technologies is to make your life easier. To do that, they "listen" to you and help you or answer your questions. That means some of these technologies are always on and, therefore, always tracking your activities. That's why it's important to understand how AI works and what you can do to protect your personally identifiable information when using AI and other automated technologies.

Examples of AI

When you use a voice assistant such as Siri or Cortana on your phone, Alexa on an Amazon Echo, or Google Home, you are interacting with AI. AI is also built into services such as Facebook Messenger through technology called chatbots. In all cases, when you use the technology, it stores data about you and begins to make predictions about your preferences and tastes.

If it seems like an app really gets you and makes great suggestions or recommendations, it's probably using AI.

Sending your data to the cloud ... and beyond?

Depending on how it's configured, AI technology can gather a lot of information about you, ranging from what you've asked or said, to where you are, to your mood, and much more. Here are some privacy challenges inherent to this process:

  1. Data travels. Your information is passed back and forth through the cloud. Because your phone or device can't process the question or request on its own, it must send the information to a server somewhere.
  2. Data storage. Your information is stored for future reference. AI technology works well and improves over time because it relies on previous questions and interactions.
  3. Security and privacy unknowns. Different companies have different security and privacy practices, which are often hard to understand and may change frequently.


Think first, then ask

AI technology may be convenient, but it's also designed to find ways to profit from your data, and your information could be hacked or exposed in a data breach. That's why it's best to follow these tips:

  1. Limit sharing. Don't share important personal details that hackers or fraudsters could use to steal your identity or commit other types of fraud.
  2. Limit use. If a device or app is designed to always listen in the background, either adjust the settings to match your privacy comfort level or only turn it on when you have a question.
  3. Assume the worst. Hacking and data breaches happen all the time, and there's no reason to believe that AI is safer to use than any other type of technology.