Do you have a family safe word to protect you from AI being used maliciously? I realized very recently why it was important to reinstate our family safe word to protect us from AI spoofing. When leaving for a short work trip, I left my phone in the taxi! I knew immediately that I had left it behind, but the driver had already pulled away. OMG, my life is on that phone!
I needed to find a way to get in touch with the driver but had two key problems.
Problem One:
I didn’t have a phone or his number. I asked the man checking boarding passes prior to security if there was a phone somewhere that I could use. That very nice man loaned me his personal phone! First problem fixed. I had a phone to use.
Problem Two:
I had to text someone who was awake, as it was incredibly early. I knew my children were awake but did not know my son’s phone number (bad mom, need to memorize that). However, my 15-year-old daughter had, for some reason, teased me the week before about not knowing her phone number and made me memorize it, so I could get in touch with her. Second problem fixed.
Oh, and a Third Problem
I then realized I had a third problem: I was texting my daughter from an unknown phone. How could I let her know it was really me, both for her safety and so she would answer? I made the text extremely specific, with key words and names in it, so she would know it was me. It worked! I got hold of her and let her know what had happened so she could help.
When I came back from my trip, I talked to the family about this experience. It taught me that, for our safety in the new age of AI, we need to put our safe word back in place to protect ourselves from AI spoofing. It gives us a reliable way to let each other know for sure that it is actually one of us communicating, and a way to check whether something is a scam.
(P.S. – I didn’t have my phone for two whole days when I was away. To get ahold of me, my family had to call the hotel and ask for my room. How old school is that!?)
What is AI spoofing?
So why do we need to protect against AI spoofing? What is it? Per Google, ‘AI Spoofing is the act of disguising a communication from an unknown source as being from a known, trusted source.’ This technique exploits the capabilities of AI to create convincing simulations of human behavior, voice, or appearance, often with malicious intent. As AI technology continues to advance, so does the sophistication and prevalence of AI spoofing attacks. This poses significant challenges to the integrity of online interactions and the security of sensitive information.
AI Texts
AI-generated texting has emerged as a powerful tool in various aspects of our lives, from customer service interactions to content creation and personal communication. This technology, powered by advanced natural language processing (NLP) algorithms, enables computers to generate text that closely resembles human speech, often indistinguishable from messages written by humans.
One of the primary challenges posed by AI-generated texting is the difficulty in distinguishing between messages authored by humans and those generated by AI algorithms.
As AI technology becomes increasingly sophisticated, AI-generated text can mimic the style, tone, and linguistic nuances of human communication with remarkable accuracy. So a text to my kids might look like a text from me, but actually come from an unknown source trying to get something from them that we wouldn’t want them to have.
While AI-generated texting offers numerous benefits, including efficiency, scalability, and personalization, it also raises concerns about authenticity, privacy, and security. Having a safe word becomes extremely important in this case, serving as a means of verifying that the person is who they say they are.
AI Voice
Another common form of AI spoofing involves the manipulation of voice recordings to create convincing imitations of individuals’ voices. With the proliferation of voice-controlled devices and virtual assistants, such as smart speakers and chatbots, voice spoofing presents a particularly insidious threat. Malicious actors can use AI algorithms to analyze and synthesize voice samples, allowing them to impersonate individuals and deceive unsuspecting victims into divulging sensitive information or carrying out fraudulent transactions.
This is scary stuff. You think you are talking to someone you know. It sounds like them and acts like them, but it isn’t them. Without an established safe word in place, you won’t know for sure.
Family Safe Word
Here’s where the safe word comes into play. By establishing a predetermined word or phrase known only to trusted parties, family members can verify the authenticity of a communication and confirm that they are indeed talking to one another. When receiving a message from an unfamiliar or potentially AI-generated source, or just a text or phone call that seems off, a family member can ask the ‘person’ to give the safe word. If they cannot give the safe word, you will know it is not the actual person and can refuse to provide any further information.
Implementing a Family Safe Word
- Choose a unique and memorable word or phrase that is known only to trusted family members.
- Regularly reinforce the importance of the safe word and its role in verifying identities.
- Practice using the safe word in various communication scenarios to ensure familiarity and readiness.
While the concept of a family safe word is simple, its impact in safeguarding against AI spoofing cannot be overstated. By embracing this proactive approach to authentication and communication, families can navigate the complexities of the digital age safely.
Your Family’s Safe Word
Make some time this week to come up with a safe word for your family to protect itself from AI and talk about how and when to use it.
If you enjoyed this post, check out my post on the 20 most important minutes of the day for parents and children (posiliveity.com).