AI Posts

  • Leveraging AI Personas for Comprehensive Document Feedback

    AI

    Receiving timely and relevant feedback is crucial for improving content quality. Recently, a colleague mentioned how beneficial it would be to have “personas on demand” for feedback. This sparked an idea to create a system that could provide detailed evaluations from multiple perspectives. It was also a perfect opportunity to use my local LLM setup to get immediate feedback on a Redbooks publication released just yesterday: the IBM z17 Technical Introduction.

  • Creating a Powerful Document Processing App with DocRAG

    AI

    In today’s data-driven world, managing and processing documents efficiently is crucial. One of the most common challenges is converting PDFs into formats that are easily searchable and integrable with large language models (LLMs). This blog post details how I used Cursor to create an app called DocRAG, which converts PDFs into markdown, txt, and JSON files for easy Retrieval-Augmented Generation (RAG) using your local LLM system.

  • From PDFs to Personalized AI: Building a Custom RAG System for IBM Redbooks

    AI, Tutorial

    In the world of enterprise IT, technical documentation is both invaluable and overwhelming. IBM Redbooks, the gold standard for in-depth technical guides on IBM products, contain thousands of pages of expert knowledge. But how do we transform these static PDFs into dynamic, queryable knowledge bases? Today, I’d like to share a journey of building a custom Retrieval-Augmented Generation (RAG) system specifically for IBM technical documentation. This project demonstrates how modern AI techniques can unlock the knowledge trapped in technical PDFs and make it accessible through natural language queries.

    The Challenge: Unlocking Technical Knowledge

    IBM Redbooks are comprehensive technical guides, often hundreds of pages long, covering complex systems like IBM Z mainframes, cybersecurity solutions, and enterprise storage. These documents are treasure troves of information but present several challenges:

  • Google AI Mode vs. Perplexity: A Comparison for Flask Documentation

    AI

    Google recently launched their experimental AI Mode in Search, and as someone working on a BeeAI framework project with Flask as a front end, I wanted to compare how Google’s new offering stacks up against Perplexity when searching for Flask information.

  • Building a Recipe Creator with BeeAI Framework: A Comprehensive Tutorial

    Python, AI, Tutorial

    In this tutorial, we’ll build a practical multi-agent system using the BeeAI framework that can create recipes based on user-provided ingredients. Our Recipe Creator will demonstrate how specialized agents can work together to accomplish a complex task.

  • From Idea to Implementation: Building a Recipe Creator with BeeAI - A Development Journey

    Python, AI, Tutorial

    In this tutorial, I’ll share the iterative process of developing a Recipe Creator application using the BeeAI framework. Rather than presenting a polished, final product, I want to walk through the actual development journey that Claude (my AI coding partner) and I embarked on together. We practiced what I like to call “vibe coding” - a collaborative process where I guided the conceptual direction while Claude helped implement and troubleshoot the technical details. This post highlights the challenges we faced, the solutions we discovered, and the insights we gained along the way as a human-AI coding team.

  • Navigating the AI Wave: Career Choices and the Future of Knowledge Work

    AI

    In a recent interview on Hard Fork, hosted by Kevin Roose and Casey Newton, Dario Amodei of Anthropic shared some profound insights about the future of work in the age of AI. As someone who creates content by interacting with engineering teams at IBM Redbooks, I found his perspectives both enlightening and somewhat unsettling.

  • Unveiling the Magic: How Large Language Models Handle Conversations

    AI

    When I first started interacting with large language models (LLMs) like those powering Claude and Gemini, I was struck by how seamlessly they maintained context in conversations. It felt as if the model remembered our entire exchange, allowing for a natural back-and-forth dialogue. However, this perception is an illusion—one that masks a fascinating mechanism behind the scenes.
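    The mechanism behind that illusion can be sketched in a few lines: a chat client simply resends the entire message history with every request, so the model "remembers" only because it rereads everything each turn. The function and message contents below are illustrative, not any specific API:

    ```python
    # Minimal sketch: stateless LLMs appear to remember a conversation
    # because the client resends the full transcript on every turn.
    def build_prompt(history, user_message):
        """Append the new user message and return the full transcript the model sees."""
        history.append({"role": "user", "content": user_message})
        return history

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    # Turn 1: the model receives 2 messages (system + user).
    turn1 = build_prompt(history, "What is IBM z17?")
    history.append({"role": "assistant", "content": "It is a mainframe."})

    # Turn 2: the model receives the whole prior exchange again -- 4 messages.
    turn2 = build_prompt(history, "When was it announced?")
    ```

    Each turn grows the payload, which is why long conversations eventually hit the model’s context window.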

  • Running OpenWebUI with Podman: A Corporate-Friendly LLM Setup

    Tutorial, AI

    While setting up local LLMs has become increasingly popular, many of us face restrictions on corporate laptops that prevent using Docker. Here’s how I successfully set up OpenWebUI using Podman on my IBM-issued MacBook, creating a secure and IT-compliant local AI environment.
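    As a rough sketch of the kind of command the post works toward (the image name follows OpenWebUI’s published container image; the port mapping and volume name are choices you should adapt to your own environment):

    ```shell
    # Run OpenWebUI with Podman instead of Docker.
    # Maps host port 3000 to the app's internal port 8080 and
    # persists data in a named volume so settings survive restarts.
    podman run -d \
      -p 3000:8080 \
      -v open-webui:/app/backend/data \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main
    ```

    Because Podman is daemonless and can run rootless, this setup tends to sit more comfortably within corporate IT policies than Docker Desktop.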

  • The Productivity Dilemma: Why More Speed Isn't Always the Answer

    AI

    Lately, I’ve been watching videos from Matthew Berman, and I’ve really enjoyed them. They’re great deep dives into various AI topics.