A few months ago, Snapchat rolled out several new features that harness the power of AI.
One feature is Snapchat Dreams. Using facial recognition technology, Snapchat scans a person's facial features and places them as the star of fictional, AI-generated scenes. Similar technology powers TikTok filters and Apple's Memoji in iMessage.
Another interesting feature, AI Snaps, lets users create snaps by typing in a prompt or choosing from a list of suggested prompts.
A while back, Snapchat users were surprised when the app's artificial intelligence chatbot, My AI, posted photos to its own story. Shortly after, the post was removed and the chatbot was disabled, leaving users wondering how the AI actually works.
One of My AI's features lets users send photos to the chatbot in real time and receive AI-generated images in response. The feature drew heated debate on other social media platforms because a large share of Snapchat's users are under 18.
While AI can be a helpful research tool, many argue that it also poses a risk to children. AI-generated child sexual abuse material (CSAM) refers to the use of AI algorithms to create lifelike, but entirely fabricated, explicit content involving minors.
These AI tools have an unsettling ability to create content that looks shockingly real, blurring the lines between what’s authentic and what’s not for both parents and the authorities tasked with combating CSAM and protecting children.
These same tools can also help predators exploit children by building trust and drawing them into explicit conversations.
AI is becoming unavoidable at this point, and parents must keep that in mind when allowing their children to use technology.
Tips for staying safe online include:
- Engage in open conversation
- Set clear boundaries
- Promote skepticism
- Monitor online activity
- Use privacy settings
These are all things we can do to keep ourselves and our little birds safe as the risks of AI become an ever-larger part of our lives.