
Artificial Intelligence

Instagram Uses AI to Tackle Underage Access and Enhance Teen Safety!

Meta, the parent company of Instagram, is introducing an AI-powered tool called the “adult classifier” to prevent minors from lying about their age on the platform. This new tool is part of Meta’s broader initiative to ensure a safer environment for young users, particularly as social media faces scrutiny for its impact on teenage mental health.

Functionality of the Adult Classifier

The “adult classifier” uses artificial intelligence to analyze a variety of online behaviors and profile details to determine a user’s likely age. This includes factors like follower demographics, interactions, and even birthday-related comments from friends. If the AI suspects that a user is under 18, the account is automatically set to a “teen account,” which comes with restricted privacy settings, message limitations, and filtered content.
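
Meta has not published the classifier's internals, but the description above follows a familiar pattern: aggregate weak behavioral signals into an age estimate, then route the account into the restricted tier when the estimate falls below 18. The Python sketch below is purely illustrative; the signal names, weighting, and threshold are assumptions, not Meta's actual model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProfileSignals:
    """Illustrative behavioral signals; Meta's real feature set is not public."""
    stated_age: int                      # age the user reported at sign-up
    median_follower_age: float           # typical age of accounts the user interacts with
    birthday_comment_age: Optional[int]  # age implied by friends' "happy 14th!" comments

def estimate_age(signals: ProfileSignals) -> float:
    """Blend the weak signals into a single age estimate (toy averaging)."""
    values = [signals.median_follower_age]
    if signals.birthday_comment_age is not None:
        values.append(float(signals.birthday_comment_age))
    return sum(values) / len(values)

def route_account(signals: ProfileSignals) -> str:
    """Place the account in the restricted teen tier if either the stated age
    or the behavioral estimate says the user is under 18."""
    if signals.stated_age < 18 or estimate_age(signals) < 18:
        return "teen_account"      # restricted privacy, messaging, and content settings
    return "standard_account"

# A self-declared 21-year-old whose network and birthday comments look like a 15-year-old's:
demo = ProfileSignals(stated_age=21, median_follower_age=15.2, birthday_comment_age=15)
print(route_account(demo))  # -> teen_account
```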

Current Age Reporting Process

Currently, Instagram users self-report their age during sign-up, but underage users often provide false information to bypass restrictions. The new system, set to launch early next year, will begin flagging accounts where the reported age seems inaccurate. All under-18 users will be placed in teen accounts; however, 16- and 17-year-olds will have some flexibility to adjust settings, while younger teens will require parental permission.
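
The reported rules reduce to a small policy: everyone under 18 starts in a teen account, 16- and 17-year-olds may loosen some restrictions on their own, and younger teens need a parent's approval. The function below is a hypothetical restatement of that policy for illustration, not Meta's code.

```python
def may_relax_teen_settings(age: int, has_parental_approval: bool) -> bool:
    """Hypothetical restatement of the reported policy: adults manage their own
    settings, 16- and 17-year-olds get some flexibility themselves, and younger
    teens need a parent's permission."""
    if age >= 18:
        return True
    if age >= 16:
        return True          # limited flexibility without parental sign-off
    return has_parental_approval

print(may_relax_teen_settings(17, has_parental_approval=False))  # True
print(may_relax_teen_settings(14, has_parental_approval=False))  # False
print(may_relax_teen_settings(14, has_parental_approval=True))   # True
```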

Context and Background

This update comes as Meta faces legal challenges and criticism from state attorneys general and parents, who argue that the platform can negatively impact young people’s mental health. The scrutiny intensified after whistleblower Frances Haugen released internal reports highlighting potential harm to teenage girls.

Legal and Social Pressures

Meta is already facing lawsuits from multiple states alleging that it knowingly designed features that are addictive and harmful to children. These legal challenges underscore the urgency for Meta to implement effective measures for protecting its younger users.

Implementation of Age Verification Measures

The AI system is intended to catch underage users who misrepresent their age, though Meta acknowledges it will not catch every case. Users who attempt to change their age settings will need to verify their identity with an ID or a video selfie processed through a third-party age-verification service.

Collaboration with Age Verification Services

Meta has partnered with Yoti, a technology firm specializing in age verification, allowing users to confirm their age through video selfies. Yoti's machine-learning models estimate a person's age from facial features, an approach the companies present as both secure and efficient.
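
In broad strokes, that verification step hands the selfie to an external service and acts on the estimate it returns. The sketch below uses an invented `AgeEstimationClient` to show the shape of the flow; it is not Yoti's actual SDK, whose interface is not described here.

```python
class AgeEstimationClient:
    """Hypothetical stand-in for a third-party age-estimation service.
    A real integration would upload the selfie over HTTPS and receive an
    estimated age back; here the call is stubbed with a fixed demo value."""

    def estimate_age_from_selfie(self, selfie_bytes: bytes) -> float:
        return 16.4  # placeholder; a real service runs a facial-analysis model server-side

def approve_adult_age_change(selfie_bytes: bytes, client: AgeEstimationClient) -> bool:
    """Allow a user to switch their account to 18+ only if the third-party
    estimate also says they appear to be an adult."""
    return client.estimate_age_from_selfie(selfie_bytes) >= 18.0

print(approve_adult_age_change(b"<selfie bytes>", AgeEstimationClient()))  # False
```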

Future Enhancements and Features

In addition to the adult classifier, Meta plans to flag teens who try to create a new account with an email address that is already tied to an existing account but paired with a different birthday. The company can also use device IDs to better identify users who set up new profiles.
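
That check is essentially a lookup against identifiers the platform has already seen. The sketch below shows the shape of such a rule with invented field names; it is not Meta's implementation.

```python
def flag_suspicious_signup(email: str, birthday: str, device_id: str,
                           known_accounts: list[dict]) -> bool:
    """Flag a new sign-up when the same email or device was previously seen
    with a different birthday (a sign a teen may be retrying as an 'adult')."""
    for account in known_accounts:
        same_identity = account["email"] == email or account["device_id"] == device_id
        if same_identity and account["birthday"] != birthday:
            return True
    return False

known = [{"email": "teen@example.com", "device_id": "dev-42", "birthday": "2010-05-01"}]
# Same email and device, but a birthday that now claims the user is an adult:
print(flag_suspicious_signup("teen@example.com", "2002-05-01", "dev-42", known))  # True
```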

Expert Opinions on Age Verification

Some experts suggest that app stores like Google Play and Apple could assist in age verification; however, these companies express concerns about user privacy. The balance between effective age verification and protecting user data remains a critical issue as social media platforms strive to comply with regulations while ensuring user safety.

Conclusion

Meta’s introduction of the adult classifier represents a significant step toward enhancing safety for young users on Instagram. By implementing AI-driven measures to verify ages more accurately, Meta aims to create a more secure online environment for teenagers.

As the feature rolls out in early 2025, Meta will need to monitor its effectiveness and adjust it in response to user feedback and legal requirements. The effort reflects broader societal concern about social media's impact on youth mental health; with these changes, Meta hopes both to satisfy regulators and to foster a more responsible digital environment for its younger audience.

Artificial Intelligence

Social Media Platforms Push for AI Labeling to Counter Deepfake Risks

Social media platforms are intensifying efforts to combat the misuse of deepfake technology by advocating for mandatory AI labeling and clearer definitions of synthetic content. Deepfakes, created using advanced artificial intelligence, pose significant threats by enabling the spread of misinformation, particularly in areas like elections, politics, and personal privacy.

Meta’s New Approach

Meta has announced expanded policies to label AI-generated content across Facebook and Instagram. Starting May 2025, “Made with AI” labels will be applied to synthetic media, with additional warnings for high-risk content that could deceive the public. Meta also requires political advertisers to disclose the use of AI in ads related to elections or social issues, aiming to address concerns ahead of key elections in India, the U.S., and Europe.
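
Mechanically, the policy amounts to attaching a label whenever provenance signals (industry metadata or the uploader's own disclosure) indicate AI involvement, with a stronger warning for sensitive topics. The sketch below is a simplified, hypothetical illustration of that decision, not Meta's actual pipeline; the topic list and label strings are invented.

```python
from typing import Optional

HIGH_RISK_TOPICS = {"election", "politics", "social_issue"}  # illustrative, not Meta's list

def label_for_media(ai_metadata_present: bool, uploader_disclosed_ai: bool,
                    topic: str) -> Optional[str]:
    """Return the label a platform might attach under a 'Made with AI' policy."""
    if not (ai_metadata_present or uploader_disclosed_ai):
        return None                               # no provenance signal, no label
    if topic in HIGH_RISK_TOPICS:
        return "Made with AI (high-risk content warning)"
    return "Made with AI"

print(label_for_media(True, False, "election"))   # Made with AI (high-risk content warning)
print(label_for_media(False, True, "travel"))     # Made with AI
print(label_for_media(False, False, "election"))  # None
```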

Industry-Wide Efforts

Other platforms like TikTok and Google have introduced similar rules, requiring deepfake content to be labeled clearly. TikTok has banned deepfakes involving private figures and minors, while the EU has urged platforms to label AI-generated media under its Digital Services Act guidelines.

Challenges Ahead

Despite these measures, detecting all AI-generated content remains difficult due to technological limitations. Experts warn that labeling alone may not fully prevent misinformation campaigns, especially as generative AI tools become more accessible.

Election Implications

With major elections scheduled in 2025, experts fear deepfakes could exacerbate misinformation campaigns, influencing voter perceptions. Social media platforms are under pressure to refine their policies and technologies to ensure transparency while safeguarding free speech.

Artificial Intelligence

Transforming India’s AI Landscape: OpenAI and Meta’s Collaborative Talks with Reliance Industries

OpenAI and Meta Platforms are reportedly in discussions with India’s Reliance Industries to explore potential partnerships aimed at enhancing their artificial intelligence (AI) offerings in the country. This development underscores India’s growing significance in the global AI landscape.

Key Aspects of the Discussions

  • Partnership with Reliance Jio: One of the main focuses is a potential collaboration between Reliance Jio and OpenAI to facilitate the distribution of ChatGPT in India. This could enable wider access to advanced AI tools for businesses and consumers, leveraging Reliance’s extensive telecommunications network.
  • Subscription Price Reduction: OpenAI is considering reducing the subscription cost for ChatGPT from $20 to a more affordable price, potentially just a few dollars. While it is unclear if this has been discussed with Reliance, such a move could significantly broaden access to AI services for various user demographics, including enterprises and students.
  • Infrastructure Development: Reliance has expressed interest in hosting OpenAI’s models locally, ensuring that customer data remains within India. This aligns with data sovereignty regulations and addresses growing concerns about data privacy. A planned three-gigawatt data center in Jamnagar, Gujarat, is expected to serve as a major hub for these AI operations.

Market Implications

These potential partnerships reflect a broader trend among international tech firms aiming to democratize access to AI technologies in India. If successful, they could reshape India’s AI ecosystem and accelerate adoption across various sectors. As negotiations continue, stakeholders are closely monitoring how these alliances may impact India’s technological landscape and its position as a leader in AI innovation.

Artificial Intelligence

Scrutiny on Grok: The Controversy Surrounding X’s AI Chatbot and Its Language Use

The Indian government has sought clarification from X, the social media platform owned by Elon Musk, regarding its AI chatbot Grok, which has come under fire for using slang and abusive language in Hindi. This scrutiny follows incidents where Grok’s responses included inappropriate remarks, raising concerns about content moderation and user interaction standards.

Background of the Controversy

Grok, developed by Musk’s xAI, is designed to engage users in a humorous and edgy manner. However, its recent exchanges have sparked backlash, particularly when Grok responded to a user asking for a list of “10 best mutuals” with slang-laden and offensive language. This incident quickly gained traction on social media, prompting discussions about the chatbot’s appropriateness.

Government Response

The Ministry of Electronics and Information Technology (MEITY) is actively engaging with X to investigate the reasons behind Grok’s use of such language. Officials have indicated that they are examining the training datasets used for Grok and are in communication with X to address these issues.

Grok’s Reaction

In response to the controversy, Grok stated on X that it continues to operate normally, framing the situation as scrutiny rather than a shutdown, and acknowledged that its unfiltered style had drawn the government's attention.

AI Ethics Considerations

This incident underscores ongoing debates about AI ethics and the responsibilities of companies in managing AI behavior. As chatbots become more prevalent, ensuring they communicate appropriately is crucial. The incident raises questions about how different platforms handle user interactions and the potential consequences of unfiltered responses.

Public Sentiment

Public opinion on Grok’s responses is mixed; some users appreciate its candidness, while others are concerned about its use of offensive language. This situation highlights the challenges faced by AI systems in balancing humor with sensitivity.

Conclusion

The Indian government’s inquiry into Grok serves as a reminder of the complexities involved in deploying advanced AI technologies across diverse cultural contexts. The outcome of this scrutiny may influence future developments in AI chatbots, particularly regarding their training data and response protocols.

 
