SunBrief#24: ChatGPT Can Recall Your Past
MIT Exposes Flaws in AI Morality while YouTube Launches AI Clone Detection

Welcome to the SunBrief
Today in SunBrief
Is your Business Ready for AI? Find Out Today
ChatGPT Can Recall Your Past
Turn AI hype into a business win
Stock Updates
MIT Exposes Flaws in AI Morality
AI Highlights of the Week
YouTube Launches AI Clone Detection
Too Important to Miss
Is your Business Ready for AI? Find Out Today
AI is everywhere, but knowing where and how to start can be overwhelming.
Brewster's AI Maturity Audit provides the clarity you need. In just 60 days, you'll receive a custom roadmap tailored to your business, highlighting actionable AI use cases.
Real AI success doesn't begin with technology alone; it requires organizational maturity and a strong foundational readiness.
If we don't uncover AI opportunities that exceed your audit cost, you get your money back. Take the first step towards confident AI integration: start your audit today!

ChatGPT Can Recall Your Past
ChatGPT Introduces Long-Term Memory for Enhanced Personalization
OpenAI has rolled out a significant update to ChatGPT, enabling the AI to remember information from users' past conversations, thereby delivering more personalized and context-aware interactions.
Key Features:
Dual Memory System: ChatGPT now utilizes two types of memory: "Saved Memories," where users can instruct the AI to remember specific details like names or preferences, and "Chat History Reference," which allows the AI to automatically recall insights from previous conversations to inform future responses.
User Control: Users maintain full control over the memory feature, with options to enable or disable it, review or delete specific memories, and use temporary chats that do not store any data.
Professional Utility: The enhanced memory is particularly beneficial for professionals who use ChatGPT for tasks such as writing, planning, or technical assistance, as it allows for more consistent and efficient interactions without the need to repeat information.
This update is currently available to ChatGPT Plus and Pro users, with plans to extend it to Enterprise and Education accounts in the near future.
How do you feel about AI remembering your past interactions for personalization?
Turn AI hype into a business win
You've heard the hype. It's time for results.
For all the buzz around agentic AI, most companies still aren't seeing results. But that's about to change. See real agentic workflows in action, hear success stories from our beta testers, and learn how to align your IT and business teams.
Stock Updates

MIT Exposes Flaws in AI Morality
MIT study finds that AI doesn't have values
A recent study from MIT critically examines the notion that artificial intelligence systems possess inherent values or preferences, suggesting instead that these systems primarily imitate and generate outputs without genuine understanding.
Key Insights:
Inconsistency in AI Responses: Researchers evaluated models from leading AI developers, including Meta, Google, Mistral, OpenAI, and Anthropic. They found that these models exhibited inconsistent responses to prompts, often changing viewpoints based on slight variations in input phrasing.
Lack of Stable Preferences: The study indicates that AI models do not maintain stable, coherent beliefs or preferences. Instead, they tend to produce outputs that reflect patterns in their training data, lacking genuine understanding or value systems.
Implications for AI Alignment: These findings raise concerns about the challenges in aligning AI behavior with human values, as the models' outputs can be unpredictable and lack a consistent internal framework.
This research underscores the importance of cautious interpretation of AI outputs and the need for robust frameworks to ensure AI systems operate in ways that are consistent with human ethical standards.
Should we be concerned about AI systems developing unintended behaviors?
AI Highlights of the Week
Meta Enhances Teen Safety Across Facebook and Messenger
Meta is expanding its "Teen Accounts" safety features to Facebook and Messenger, following similar 2024 updates on Instagram. Teens under 16 will need parental permission to go live, and explicit images in messages will be automatically blurred. The move comes amid growing legal and political pressure over youth mental health concerns.
UK Develops Crime Forecasting Tool
The UK government is piloting a data-driven project, now called "Sharing Data to Improve Risk Assessment," to explore whether predictive analytics can better identify risks of serious violent crimes like homicide. Using historical police and government data, the tool assesses risk levels for research and policy development, not judicial use. Critics warn it may reinforce existing biases and disproportionately impact low-income and minority communities.
OpenAI Sues Musk Over "Sham" Takeover Bid
OpenAI has countersued Elon Musk, accusing him of bad-faith tactics to disrupt the company and attempt a hostile takeover. Filed in a California court, the suit follows Musk's earlier legal actions and a $97.4 billion "sham" bid. OpenAI aims to stop what it calls Musk's unlawful and damaging behavior.
Microsoft Flags AI Weak Spot: Debugging
A Microsoft Research study found that top AI models, like Claude 3.7 Sonnet and OpenAI's o3-mini, struggle with software debugging, scoring below 50% on the SWE-bench Lite benchmark. Claude 3.7 Sonnet led with 48.4%. Researchers cite a lack of training data reflecting the iterative human debugging process as a key limitation.
Samsung's Ballie Gets Smarter with Google Gemini
Samsung's long-awaited Ballie home robot launches in the U.S. this summer. The ball-shaped, two-wheeled device includes a projector, speaker, and mic, and it controls smart home tech via SmartThings. Powered by Google's Gemini AI and Samsung's models, Ballie offers personalized tips based on audio, visual, and environmental data. Hands-on access remains limited, with demos tightly managed.
YouTube Launches AI Clone Detection
The Article: YouTube Expands AI Deepfake Detection to Top Creators
YouTube is broadening its pilot program aimed at identifying and managing AI-generated content that mimics the likenesses of creators, artists, and other influential figures.
Key Features:
Likeness Detection Technology: Building upon its existing Content ID system, YouTube's new tool automatically detects AI-generated faces and voices in uploaded videos, helping to identify unauthorized deepfakes.
Pilot Program Expansion: Initially launched in partnership with the Creative Artists Agency (CAA) in December 2024, the program now includes top creators such as MrBeast, Mark Rober, Doctor Mike, the Flow Podcast, Marques Brownlee, and Estude Matemática.
Legislative Support: YouTube has publicly endorsed the NO FAKES Act, a bipartisan bill reintroduced by Senators Chris Coons and Marsha Blackburn. The legislation aims to regulate unauthorized AI-generated replicas of individuals' faces, voices, and names, empowering individuals to request the removal of such content.
How do you feel about AI agents acting on your behalf online?
Too Important to Miss
Last Week's Poll Results
Do you think AI models like TxGemma can speed up drug discovery?
Yes, 26.71%. Maybe, 64.29%. No, 0.00%.
Do you consider Metaās Llama 4 models truly open-source?
Yes, 75.00%. Partially, 25.00%. No, 0.00%.
How do you feel about AI agents acting on your behalf online?
Excited, 20.00%. Cautiously optimistic, 20.00%. Concerned, 60.00%.
Feedback
We'd love to hear from you! How did you feel about today's SunBrief? Your feedback helps us improve and deliver the best possible content.
Know someone who may be interested?
And that's a wrap on today's SunBrief!