Artificial Intelligence
Scrutiny on Grok: The Controversy Surrounding X’s AI Chatbot and Its Language Use
The Indian government has sought clarification from X, the social media platform owned by Elon Musk, regarding its AI chatbot Grok, which has come under fire for using slang and abusive language in Hindi. This scrutiny follows incidents where Grok’s responses included inappropriate remarks, raising concerns about content moderation and user interaction standards.
Background of the Controversy
Grok, developed by Musk’s xAI, is designed to engage users in a humorous and edgy manner. However, its recent exchanges have sparked backlash, particularly when Grok responded to a user asking for a list of “10 best mutuals” with slang-laden and offensive language. This incident quickly gained traction on social media, prompting discussions about the chatbot’s appropriateness.
Government Response
The Ministry of Electronics and Information Technology (MeitY) is actively engaging with X to investigate the reasons behind Grok’s use of such language. Officials have indicated that they are examining the training datasets used for Grok and are in communication with X to address the issue.
Grok’s Reaction
In response to the controversy, Grok stated on X that it continues to operate normally, describing the situation as scrutiny rather than a shutdown, and acknowledged that its unfiltered style had attracted government attention.
AI Ethics Considerations
This incident underscores ongoing debates about AI ethics and the responsibilities of companies in managing AI behavior. As chatbots become more prevalent, ensuring they communicate appropriately is crucial. The episode also raises questions about how different platforms handle user interactions and the potential consequences of unfiltered responses.
Public Sentiment
Public opinion on Grok’s responses is mixed; some users appreciate its candidness, while others are concerned about its use of offensive language. This situation highlights the challenges faced by AI systems in balancing humor with sensitivity.
Conclusion
The Indian government’s inquiry into Grok serves as a reminder of the complexities involved in deploying advanced AI technologies across diverse cultural contexts. The outcome of this scrutiny may influence future developments in AI chatbots, particularly regarding their training data and response protocols.
Artificial Intelligence
UAE G42 Launches 8-Exaflop AI Supercomputer in India for Sovereign AI 2026
UAE-based G42 has announced plans to deploy an 8-exaflop AI supercomputer in India, unveiled at the AI Impact Summit 2026 in Delhi. The national-scale project is a partnership with Cerebras, MBZUAI, and India’s C-DAC, operating under full Indian data sovereignty as part of the India AI Mission.
The supercomputer will boost sovereign AI capabilities, giving startups, researchers, academics, SMEs, and government agencies access to compute for tailored applications such as public services and language technology. G42 India CEO Manu Jain highlighted its role in making India AI-native while prioritizing security.
This follows India-UAE tech pacts in late 2025, positioning India among global leaders in exaflop AI infrastructure amid rising demand for localized compute. Cerebras CSO Andy Hock noted it will accelerate large model training for India-specific needs.
Artificial Intelligence
Adopt AI Secures $6 Million to Power No-Code AI Agents for Business Automation
Adopt AI, a San Jose and Bengaluru-based agentic AI startup, has raised $6 million in seed funding led by Elevation Capital, with participation from Foster Ventures, Powerhouse Ventures, Darkmode Ventures, and angel investors. The funding will be used to expand the company’s engineering and product teams and to scale enterprise deployments of its automation platform.
Founded by Deepak Anchala, Rahul Bhattacharya, and Anirudh Badam, Adopt AI offers a platform that lets businesses automate workflows and execute complex actions using natural language commands, without needing to rebuild existing systems. Its core products include a no-code Agent Builder, which allows companies to quickly create and deploy AI-driven conversational interfaces, and Agentic Experience, which replaces traditional user interfaces with text-based commands.
The startup’s technology is aimed at SaaS and B2C companies in sectors like banking and healthcare, helping them rapidly integrate intelligent agent capabilities into their applications. Adopt AI’s team includes engineers from Microsoft and Google, with Chief AI Officer Anirudh Badam bringing over a decade of AI experience from Microsoft.
The company has also launched an Early Access Program to let businesses pilot its automation solution and collaborate on new use cases.
Artificial Intelligence
Social Media Platforms Push for AI Labeling to Counter Deepfake Risks
Social media platforms are intensifying efforts to combat the misuse of deepfake technology by advocating for mandatory AI labeling and clearer definitions of synthetic content. Deepfakes, created using advanced artificial intelligence, pose significant threats by enabling the spread of misinformation, particularly in areas like elections, politics, and personal privacy.
Meta’s New Approach
Meta has announced expanded policies to label AI-generated content across Facebook and Instagram. Starting May 2025, “Made with AI” labels will be applied to synthetic media, with additional warnings for high-risk content that could deceive the public. Meta also requires political advertisers to disclose the use of AI in ads related to elections or social issues, aiming to address concerns ahead of key elections in India, the U.S., and Europe.
Industry-Wide Efforts
Other platforms like TikTok and Google have introduced similar rules, requiring deepfake content to be labeled clearly. TikTok has banned deepfakes involving private figures and minors, while the EU has urged platforms to label AI-generated media under its Digital Services Act guidelines.
Challenges Ahead
Despite these measures, detecting all AI-generated content remains difficult due to technological limitations. Experts warn that labeling alone may not fully prevent misinformation campaigns, especially as generative AI tools become more accessible.
Election Implications
With major elections scheduled in 2025, experts fear deepfakes could exacerbate misinformation campaigns, influencing voter perceptions. Social media platforms are under pressure to refine their policies and technologies to ensure transparency while safeguarding free speech.