The Vibe Check: What This Issue Feels Like

Playing around with Google Gemini, I created this image - By Jeneba

Letter From The Editor

We made it.

2025 was the year AI stopped being a novelty and became the new normal. Your browser thinks now. Your apps reason with you. Your tools act. And if you're reading this newsletter, you didn't just watch it happen; you were an active participant.

I'm giving you my AI Wrapped 2025: the tools, the trends, the lessons, and the predictions that shaped my thinking this year. Plus: a preview of what I see coming in 2026 (spoiler: the hype era is over).

Think of this as our year-end reflection session. Grab your tea. Let's look back — and look ahead.

If I did a Spotify Wrapped for my AI life, it would look something like this:

  • Top tools: Perplexity and Claude (especially Claude Code, we go together real bad)

  • Most used prompt: "Help me think or curate…"

  • Biggest learning: AI literacy isn't about keeping up with the industry — it's about keeping up with yourself

  • Minutes saved: Too many to count (but the mental bandwidth I got back? Priceless)

What would YOUR AI Wrapped look like? Hit reply and tell me. I'm genuinely curious.

Jeneba 👩🏾‍💻

🎁 MY AI WRAPPED 2025

If Spotify did an AI Wrapped, here's what mine would say:

🏆 My Top AI Tool of 2025

Claude — and it's not even close.

Not because it's the "best" model (that's a moving target). But because it's the tool I think with. It's where I do all my writing, vibe coding, and professional tech writing and ghostwriting when I'm freelancing. The one that matches my cognitive rhythm. The one I trust to push back when I'm being lazy and go deeper when I'm being vague.

Claude became my thinking partner this year. Not my replacement. Not my shortcut. My partner.

Runner-up: Perplexity for research. The citations changed how I verify information.

My Most-Used Prompt Pattern

"Before generating any output, identify the mental model I should be using to think through this problem. Name it. Analyze it. Then suggest a more efficient or creative mental model I should try. Show me what I’m missing and show me the blue ocean opportunity."

Adding this to any of your prompts will take them to the next level, for sure.
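
For instance, tacked onto an everyday request (the task below is made up for illustration; any prompt works):

=== EXAMPLE ===

Draft a 90-day launch plan for my AI community.

Before generating any output, identify the mental model I should be using to think through this problem. Name it. Analyze it. Then suggest a more efficient or creative mental model I should try. Show me what I'm missing and show me the blue ocean opportunity.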

💡 My Biggest AI Learning of 2025

Context is the new data.

The teams winning with AI aren't the ones with the best models. They're the ones with the best context systems: clean internal knowledge, decision rules, process maps, and curated data that ground AI in reality.

Generic prompts get generic outputs. Rich context gets a partnership.

My Biggest AI Mistake of 2025

Trusting outputs without verification during a time crunch. I'm not a copy-and-paste girlie, but under deadline pressure I sometimes skipped the due diligence.

AI is confident even when it's wrong. I learned this the hard way. Now I have a hard rule: the higher the stakes, the deeper the verification.

The AI Trend I'm Most Tired Of

"AI hacks" content.

The shortcuts. The "10x your productivity in 10 minutes" energy. The implication that AI fluency is about tricks instead of thinking.

Tools change. Thinking compounds. I'll keep saying it until the culture shifts.

I’m also so over the AI videos and graphics. The AI slop needs to stop.

Everyone is all of a sudden trying to be Basquiat when they are supposed to be doing their job.

The AI Trend I'm Most Excited About

Agentic workflows.

AI that doesn't just answer, it acts on its own. Multi-step reasoning. Workflow automation. Task completion.

But here's the catch: agents only work where workflow engineering exists. The promise lands when organizations stop deploying agents like magic and start deploying them like employees with identity, role clarity, access control, and accountability.

The Stat That Shaped My Year

Health and fitness is the #1 topic people ask AI about on mobile every hour, every day, all year.

(From Microsoft's study of 37.5 million conversations)

This changed how I think about AI. It's not just a productivity tool. It's becoming a life companion. A confidant. A first stop for questions we're too scared to ask anyone else.

That's power. And it requires literacy.

What would YOUR AI Wrapped look like?

Hit reply and tell me:

  • Your top AI tool of 2025

  • Your biggest AI learning

  • The trend you're tired of

I'll feature the best responses in a future issue.

I put together a metaprompt to help you do a Spotify-style annual review of your life.

Check it out here and let me know if it helps. Tag me with your results @jenebawint or @jeneba.jpg on FB.

Check out the new tools I built. Sign up early if you want to be a part of my AI community launching in February 2026. https://prompt-party-playground.lovable.app

Three years ago I started using ChatGPT to do my performance reviews. This time around, I've built an entire system to track your wins in real time, so nothing gets lost.
Lesson: the truth is, "doing amazing work" means nothing if you don't document it, analyse patterns, and show how you honoured your vision and values.
The Climb Dashboard is soft life in software form.
No more:
✖️ digging through Slack for receipts
✖️ wishing you had tracked Q1 wins
✖️ reconstructing impact from memory
✖️ panicking at review time

Your brain was not designed to be your database.
So I built one for you.

Check out the app here if you need it for your annual review this year.

Because we're not building followers. We're building AI citizens.

Welcome to the movement.

What You Need To Know Right Now

BEST OF AI 2025 - The Signals That Mattered

Here's my curated look back at the year:

The Model Wars

GPT-5.2 landed this month — OpenAI's "Code Red" response to Gemini 3. Three tiers: Instant, Thinking, Pro. Claims expert-level performance across 44 occupations. The race continues.

Claude Opus 4.5 brought long-context memory and hit 80.9% on SWE-bench coding benchmarks. Anthropic is playing a different game — safety-first, thoughtful deployment.

Gemini 3 embedded itself into Google's ecosystem. The browser became an executive assistant. Google took the AI wars in 2025.

DeepSeek V3.2 proved open-source can rival proprietary. Gold medals in math olympiads. 70% cost reduction. MIT License, so it's free for anyone to use.

The Infrastructure Arms Race

  • Anthropic: $50B investment in AI facilities

  • Microsoft & Meta: $70B+ each in data centers

  • Nvidia acquires Groq: $20B deal for inference tech

  • Alphabet buys Intersect: $4.75B for data center infrastructure

This isn't hype. It's an arms race. The companies controlling infrastructure will control the AI future.

The Shift from Generative to Agentic

2025 was the year AI moved from "generate this output" to "complete this workflow."

  • Claude Code in Slack — reads conversation context, identifies repos, writes code, opens pull requests

  • Gemini Agent — triages Gmail, rebuilds calendars, books travel with approval checkpoints

  • ChatGPT + Instacart — grocery shopping without leaving the chat

The browser is becoming your executive assistant. The question isn't "what can AI make?" anymore. It's "what can AI do while I think about something else?"

🖤 Black Innovation Spotlight

Here's what the data actually shows:

Black-owned businesses are using AI at higher rates than non-Black businesses, applying it to problems mainstream tech overlooks. This isn't just adoption. It's innovation.

The funding is moving:

  • The DMZ Black Innovation Summit awarded a record $400,000 CAD to Black founders

  • Google for Startups Black Founders Fund has deployed $40M+ in equity-free awards, plus mentorship and cloud credits

  • The Black Innovation Zone reached new scale: 160+ featured Black-led startups and 40+ institutional partners

  • Amazon's generative AI initiatives are actively empowering Black entrepreneurs through platform support

Despite growth signals, Black founders still receive under 0.5% of total U.S. VC dollars and even fewer large AI funding rounds. The structural gap remains. But organizations like Black in AI are expanding community, research, and policy influence to change this.

The pattern: Black innovation isn't just catching up. It's redefining what should be built while the ecosystem works to close the capital gap.

Startup to Watch: Sonarly (YC 2026)

This one is for my vibe coders and software engineers…

What they're building: AI debugging and context-aware bug resolution for software teams. Tools that give AI agents contextual understanding of logs, sessions, and app context — vastly reducing developer cycle times.

Why it matters: Sonarly exemplifies the shift from generic LLM assistants to domain-aware AI agents that integrate effectively into existing professional workflows. This is the inflection point: AI that understands your specific context, not just general knowledge. I'll be using it to debug and, better yet, maintain code on all my vibe coding projects.

The signal: The winners in 2026 won't be the best general models. They'll be the best context-specific applications built on top of them.

Executive TL;DR - What You Need To Know This Week

Here's the tea on what's moving the AI world right now:

Google Gemini Did It - By Jeneba

🔮 MY AI PREDICTIONS FOR 2026

2026 is the year we stop asking "Can it?" and start asking "How well, at what cost, and for whom?"

1. Evangelism Ends. Evaluation Becomes the New Status Symbol.

The hype era is over. 2026 becomes the year of AI proof, not AI hype. Every initiative gets forced into measurable accounting: accuracy, cost-to-serve, risk, and adoption rate inside real workflows.

Companies that can measure AI impact will win trust, funding, and regulatory goodwill.

2. Context Becomes the New Competitive Moat

The real winners in 2026 are teams who build domain-specific context systems: clean internal knowledge, decision rules, process maps, and curated data.

Public data quality deteriorates (AI-on-AI content). Enterprises realize they can't trust generic outputs without grounding.

Prediction: Organizations stop buying "AI" and start buying context infrastructure.

3. Agents Graduate from Toys to Teammates, But Only Where Workflow Engineering Exists

Agents become real, but only in environments with strong workflow design, permissions, audit logs, and human override. Service designers and process designers, stand up 🙋🏾‍♀️

The competitive advantage shifts from "best model" to best human-agent collaboration design.

4. The AI Trust Crisis Peaks

2026 becomes the year of the Transparency Paradox: public demand for accountability rises, while companies disclose less.

Competitive pressure, safety concerns, IP protection, and geopolitical stakes all push secrecy while regulators and the public want the opposite.

Prediction: Trust becomes a market differentiator, not a compliance checkbox. We are moving from influencers to anybody who can build trust with their clients and audience.

5. Cognitive Health Becomes the Next Big Ethical Frontier

2026 intensifies debate about AI's cognitive impact on attention, critical thinking, and skill atrophy, especially in education and early-career knowledge workers.

The question we'll be asking: Are we getting smarter, or lazier?

Organizations that invest in AI literacy + thinking frameworks will outperform those who only roll out tools.

My 2026 AI Thesis

2026 is the year AI stops being a spectacle and becomes a governed, measured, human-in-the-loop capability, where the winners are the ones who can prove impact, protect trust, and design augmentation without cognitive collapse or the erosion of human judgement.

How I Evaluate AI in 2026

To cut through the noise, I run every AI development through four lenses:

  1. Utility Lens: Where does this create measurable value inside my real workflows?

  2. Trust Lens: What governance makes adoption psychologically and legally safe?

  3. Sovereignty Lens: Who controls the data, inference, and infrastructure?

  4. Human Lens: Does this amplify my agency, or does it erode my thinking, privacy, or dignity?

If it doesn't pass all four? It's probably hype.

Pretty Minds On AI

Welcome to Pretty Minds On AI — where the art, psychology, and humanity of AI live.

The Tea on What to Learn in AI: Signal vs. Noise

The best thing you can do for yourself this year is learn how to distinguish signal from noise and train your internal picker to spot noise faster.

We live in an environment of constant interruption in the attention economy.

Alerts. Opinions. Trends. Takes. Hot takes. Advice. Performance.

Algorithms competing for your attention.

Your brain was not designed for this volume.

At this point, we don't just need noise-canceling headphones for our ears.

We need them for our minds.

This shows up as attentional overload and signal dilution: too many competing inputs reduce your ability to detect what actually matters. When everything feels urgent, nothing is important. When everything sounds convincing, discernment collapses.

So the work is not "consume better content."

The work is thinking better.

Five Keys to Recognizing Signal vs. Noise

1. Signal Has Direction. Noise Has Velocity.

Signal moves you toward something meaningful. Noise just moves fast.

Signal aligns with goal-relevant processing: information that helps you make better decisions, deepen understanding, or take aligned action. Noise is high-arousal, low-utility information. It spikes dopamine, not insight.

Practice: Before engaging with information, ask:

  • Does this help me decide, build, or understand something specific?

  • Or does it just keep me entertained, activated, or anxious?

If it doesn't change how you think or act, it's probably noise.

2. Signal Reduces Cognitive Load. Noise Increases It.

Your brain has a limited working memory. Noise overwhelms it. Signal simplifies.

Information is valuable only if it reduces confusion or increases clarity. Noise feels "busy." Signal feels clean — even when it's complex.

Practice: After consuming something, notice your internal state:

  • Signal leaves you clearer, calmer, more oriented.

  • Noise leaves you scattered, agitated, or mentally fatigued.

Your nervous system is a better filter than your intellect.

3. Signal Is Consistent Over Time. Noise Is Reactive.

Signal holds up under reflection. Noise expires quickly.

Noise is often trend-based, outrage-driven, or optimized for immediacy. Signal tends to be principle-based; it repeats across time, contexts, and domains because it reflects something fundamental.

Practice: Adopt a 24-hour rule for ideas:

  • If something still feels relevant tomorrow or next week, it may be a signal.

  • If it loses power once the emotion fades, it was likely noise.

Timelessness is a tell.

4. Signal Invites Agency. Noise Creates Dependence.

Signal strengthens your ability to think for yourself. Noise subtly trains you to outsource judgment.

Noise often exploits authority bias and social proof: "They said it, so it must be true." Signal encourages internal validation and independent reasoning.

Practice: Ask:

  • Does this make me more capable of making my own decisions?

  • Or does it make me feel like I need more input, more opinions, more permission?

If your autonomy shrinks, you're dealing with noise.

5. Signal Integrates. Noise Fragments.

Signal connects ideas. Noise isolates them.

High-quality signal integrates into your existing mental models. Noise competes for attention without connecting to anything else. Over time, too much noise fractures your thinking.

Practice: At the end of the day, practice signal synthesis:

  • Write down one idea that genuinely mattered.

  • Connect it to something you already know, believe, or are building.

If it can't be integrated, it doesn't deserve long-term mental storage.

Most people try to solve the signal-to-noise problem by curating better feeds.

But the real upgrade is internal.

A refined mind doesn't need perfect inputs; it has discernment. It knows when to pause, when to ignore, and when to lean in. That's not a productivity skill. That's a thinking skill. A life skill.

In a noisy world, clarity is a form of power.

And learning how to protect it is an act of self-respect.

Be careful out there. Wu Tang was right. Protect Ya Neck.

If you enjoyed this segment, I write stuff like this all the time on my LinkedIn newsletter.

💬 Prompt of the Week — Year-End Reflection

Close out the year with intention. Use this prompt to reflect on your AI journey:

=== MY AI WRAPPED 2025 ===

Help me reflect on my AI usage this year:

1. **Top Tools** — What AI tools did I use most? Why?
2. **Biggest Wins** — Where did AI save me time, improve my work, or unlock something new?
3. **Biggest Mistakes** — Where did I over-trust AI or use it poorly?
4. **Lessons Learned** — What do I know now that I didn't know in January?
5. **Gaps** — Where do I still need to build AI literacy?
6. **2026 Intentions** — How do I want to use AI differently next year?

Ask me clarifying questions. Challenge my thinking. Help me see patterns I might be missing.

🧠 AI Literacy Moment: Model Cards

One of the most important AI literacy skills for 2026? Understanding model cards.

Model cards are like the "nutrition labels" on food packaging, but for AI models. They explain what the model is for, how it was built, where it should (and shouldn't) be used, and what its limitations are.

Federal procurement rules in the U.S. now require AI inventories and model cards. What started as an informal best practice is rapidly becoming an industry norm.

Why you should care:

Whether you're evaluating AI tools for your organization, advising clients, or just trying to make informed decisions, knowing how to read a model card gives you power.

The key sections to understand:

  1. Overview — What is this model? What does it do?

  2. Intended Use — What problems is it designed to solve?

  3. Out-of-Scope Use — What should it NOT be used for?

  4. Limitations — Where does it fail? What biases might it have?

  5. Responsible Use Guidance — How should humans oversee this?
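
Here's what those five sections might look like filled in. This is a minimal, hypothetical sketch; the model name and every detail are invented for illustration, not pulled from any vendor's real card:

=== SAMPLE MODEL CARD (hypothetical) ===

1. **Overview**: "AcmeSummarizer v2," a text model that condenses long English-language business documents.
2. **Intended Use**: Summarizing reports, emails, and meeting notes for internal review.
3. **Out-of-Scope Use**: Legal, medical, or financial advice; languages other than English.
4. **Limitations**: Trained mostly on corporate text; may miss nuance in informal writing and can state wrong facts confidently.
5. **Responsible Use Guidance**: A human reviews every summary before it leaves the team.

Notice how fast you can judge fit: if your use case shows up in section 3, walk away.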

The frame I use:

Every AI model is a tool with strengths and weaknesses. Model cards help you understand both — so you can use AI with intention, not blind trust.

  • "This model gives its best guess based on past examples. Sometimes it will be wrong or overconfident. That's why a human still needs to review important decisions."

  • "The model can only learn from the data it sees. If some groups are under-represented in the data, its answers for those cases may be less reliable."

  • "It works well for high-volume, lower-risk tasks. It should not be the final decision for high-stakes outcomes like hiring, firing, or medical decisions."

The literacy takeaway: Treat model cards like you'd treat the fine print on a contract. They tell you what you're actually getting and what you're not.

🔍 What Model to Use When - Your Quick Reference

One of the most common questions I get: "Which AI should I use?" Here's my breakdown of the major models and when each one shines.

CLAUDE (Anthropic)

Best for: Drafting and editing content, summarizing long documents, comparative analysis, coding assistance, internal knowledge search.

Key strength: Long-context reasoning and careful, "constitutional" safety alignment. This is my thinking partner.

Not intended for: Fully automated legal, medical, or financial decisions without expert review. High-stakes outcomes without human oversight.

Limitations to know: Can produce incorrect or fabricated information ("hallucinations"), even when sounding confident. May reflect biases in training data. Safety constraints may block some outputs — that's a feature, not a bug.

Responsible use: Treat Claude as a creative partner, not an authority. Verify important claims. Keep a human in the loop for high-impact decisions.

CHATGPT (OpenAI)

Best for: General Q&A, idea generation, content drafting, coding assistance, debugging, structured tasks via custom GPTs.

Key strength: Versatile, user-friendly chat interface with strong coding and content-generation capabilities. The ecosystem of plugins and custom GPTs adds flexibility.

Not intended for: Single source of truth for regulated decisions. Handling highly confidential data without enterprise agreements.

Limitations to know: Can generate plausible but incorrect information. Capabilities shift with new versions. Some features (browsing, plugins) change over time.

Responsible use: Check outputs against trusted sources. Define clear "do not paste" data categories. For client-facing use, disclose that AI assistance is used.

PERPLEXITY

Best for: Fast, source-linked research and synthesis. Summarizing articles and reports. Landscape scans and "what's out there" exploration.

Key strength: Retrieval-augmented answers with inline citations. Strong summarization across multiple sources. This is my research companion.

Not intended for: Replacing formal research, legal review, or due diligence. Acting as a compliance authority.

Limitations to know: Quality depends on what's available on the web. Even with citations, summaries can misinterpret sources. Live search doesn't guarantee completeness.

Responsible use: Use citations as starting points, click through, skim original sources. For high-impact decisions, treat answers as a briefing, not a final recommendation.

GEMINI (Google)

Best for: Drafting and refining text inside Google Docs, Gmail, Sheets. Analyzing data, generating formulas. Multimodal tasks (text + images).

Key strength: Deep integration with Google ecosystem. Strong multimodal capabilities. If you live in Google Workspace, this is native.

Not intended for: System of record for business-critical knowledge. Unsupervised decisions in regulated workflows.

Limitations to know: Can misread documents, images, or tables. May reflect biases in web training data. Workspace integration can make it "feel" like normal work tools, but it still has typical LLM failure modes.

Responsible use: Double-check AI-generated edits, formulas, and summaries before sharing externally. For external content, pair with a review checklist.

The literacy takeaway: Different models for different jobs. Know what each one does well and where it fails.

I made this app in Google AI Studio to give you a visual in less than 10 minutes. Check it out.

Resources For The Week - Go From AI Ambition to Implementation

Level Up Your AI Literacy in 2026

If you're serious about building AI fluency, here's my curated learning pathway.

The Three Disciplines That Actually Make AI Work

According to recent research, three disciplines make the difference:

  1. Workflow Engineering — Designing how AI fits into real processes

  2. Prompt Engineering — Giving AI clear instructions

  3. Contextual Engineering — Building foundational knowledge systems

Most people focus only on #2. The winners focus on all three.

For someone serious about adoption and foundations (non-engineer to strategy/operator):

Step 1: Conceptual & Societal Grounding → Elements of AI – Part 1 (University of Helsinki, free)

Step 2: Organizational & GenAI Adoption Lens → AI for Everyone (Andrew Ng) or Google's AI Essentials path

Step 3: Structured Practice & Real-World Project → IBM AI Foundations for Everyone (build a chatbot, no coding)

Step 4: Responsible AI, Risk & Policy → AI Governance (Coursera) or ITU AI Governance in Practice

Step 5: Go Deeper Technically (Optional) → Building AI (Elements of AI – Part 2) or a beginner ML course

Flagship "AI for Everyone" Courses

AI for Everyone (DeepLearning.AI / Andrew Ng) Non-technical course explaining AI concepts, capabilities/limits, and strategy. No coding required. The gold standard for professionals who work with AI but aren't engineers.

Elements of AI – Part 1 (University of Helsinki) Free, self-paced course explaining what AI is, what it can and cannot do, and how it affects society. Designed explicitly for non-technical learners.

IBM AI Foundations for Everyone (Coursera) Beginner-friendly, free-to-audit course covering machine learning, neural networks, NLP, and computer vision in conceptual form. Culminates in building a Watson-powered chatbot.

AI Literacy & Workplace Skills

Google AI Essentials (Grow with Google) Short modules on AI basics and generative AI, focused on using AI tools to "supercharge your work or business."

OpenAI Academy Free learning paths on how to use OpenAI tools and understand generative AI concepts. Practical skills like prompting, workflow integration, and responsible use.

LinkedIn Learning – Building AI Literacy Multi-course path teaching core AI concepts, how to identify AI opportunities at work, and how to collaborate with AI tools.

Governance, Ethics & Responsible AI

AI Governance (Coursera) Focused on making AI tools safe, fair, and accountable. Covers risk management, compliance, AI security, and responsible AI. Geared toward business leaders and risk/governance roles.

ITU – AI Governance in Practice Live, cohort-based course for policymakers and public officials. Covers bias, explainability, oversight, audits, and strategic roadmapping.

What It Takes to Prompt Effectively

The 5 elements of an effective AI prompt:

  1. Role — Who should the AI be?

  2. Task — What specifically should it do?

  3. Context — What background information does it need?

  4. Format — How should it structure the output?

  5. Constraints — What should it avoid or limit?

Master these, and you'll outperform 90% of AI users.
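
To make it concrete, here's one prompt with all five elements labeled (the scenario is hypothetical; swap in your own details):

=== EXAMPLE: 5-ELEMENT PROMPT ===

**Role**: You are a senior editor for a weekly AI newsletter.
**Task**: Review the draft issue below and suggest three improvements.
**Context**: The audience is non-technical professionals building AI literacy; the tone is warm but direct.
**Format**: A numbered list, one sentence per suggestion.
**Constraints**: Don't rewrite the draft or add new sections; keep each suggestion under 25 words.

Drop the labels when you paste it into a chat if you like; what matters is that all five elements show up.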

Tools I'd Actually Miss If They Ghosted Me

  • Claude — My thinking partner

  • Perplexity — Research with citations

  • Taskade — All-in-one workspace with AI that writes tasks, creates project outlines, and keeps teams synced

  • Rezzy — Worth checking out for reservations and scheduling

Quick Hits — Tools & Updates

Photoshop now supports third-party AI models. You can use Nano Banana Pro, Flux, and others for generative fill — not just Adobe Firefly. This signals a shift toward model-agnostic creative workflows.

Google adds video verification to Gemini. You can now see if content was created with Google AI tools. Transparency features are becoming standard.

U.S. Air Force retires in-house chatbot for government-grade AI. They're moving to GenAI.mil on Gemini Gov. AI as strategic operational infrastructure is here.

Japan's Generative AI Award 2025. Recognition ecosystems are validating innovation outside major tech hubs. The AI conversation is going global.

China's draft AI chatbot rules focus on user psychological safety — an emerging area of AI safety that will shape global standards.

Scribe raises $75M Series C. Enterprise documentation automation is scaling. AI is streamlining how companies capture workflows and operational intelligence.

Join the AI Citizen Movement

My mission is simple: help our community go from AI ambition to implementation.

Not just understanding AI, but actually using it to build, create, and lead.

If this issue hit different for you, forward it to one person who needs to see Black innovation getting the spotlight it deserves.

What I want to hear from you:

  • What would YOUR Year End Spotify Wrapped look like? (Hit reply!)

  • Do you use AI for mental health or wellness? How?

  • What topics do you want me to cover in future issues?

Let's build the future with intention.

— Jeneba
