--- title: The Age of the Generalist description: How AI is transforming software development and why high-agency generalists will thrive in this new era of technology. tags: ['ai'] --- # The Age of the Generalist AI this, AI that. Like many of you, I'm constantly switching between *"Wow, you can do that with AI?"* and *"Ugh, not AI again."* But here we are. AI is changing how we work and how we live. I’m a millennial. The last major disruption technology brought to my life was social media (and then the iPhone). Now there is a new wave coming. It will slowly change everything. And yes, it is AI. How does AI change the work of software developers? Short answer: we will all become more like generalists. This shift is already happening. Most teams no longer separate frontend and backend. They hire full-stack developers. This helps teams move faster. Traditional separation creates communication overhead and slows execution. So we are already becoming generalists. If you join a startup as a dev, you will not find neat job titles. There is no budget for that. You wear whatever hat the day requires. Now add AI to the mix. Tasks that took hours , writing code, setting up tooling , can now be done in minutes. You can delegate to AI, generate boilerplate, spin up components, scaffold tests. You can work in parallel. You will probably spend more time reading code than writing it. But AI has no understanding of architecture. It does not know what good design looks like. It cannot distinguish between cohesion and coupling. It does not know when to break something into modules or when to leave it flat. It has no initiative. It only works when a human prompts it. That is why **high agency** is more important than ever. AI does not replace builders. It replaces waiters. If you wait to be told what to do, you will fall behind. If you take action, ask questions, and push things forward, you will stay ahead. High agency means seeing a mess and deciding what to clean up. It means figuring out what matters without someone else making the roadmap. AI can give you answers, but it will never tell you what is worth building. So what should developers focus on? Become a generalist with high agency. Think of Leonardo da Vinci. He painted *The Last Supper* and *Mona Lisa*. He dissected human bodies and sketched the nervous system. He designed flying machines. He wrote about optics, engineering, and warfare. He did not pick a lane. He learned widely and built from what he learned. That mindset , curious, self-directed, and hands-on , is what will matter most in the age of AI. --- --- title: How I Use LLMs description: Learn how I use LLMs to improve my productivity and efficiency. tags: ['ai', 'productivity'] --- # How I Use LLMs Motivated by the awesome YouTube video from Andrew Karpathy [How I use LLMs](https://www.youtube.com/watch?v=EWvNQjAaOHw), I decided to give two talks on how I use LLMs, both at my company and at the TypeScript meetup in Munich. This blog post is the written version of those talks. Keep in mind that while some things might change, especially regarding the models I currently use, I hope these tips will remain helpful for a long time. As a junior developer, you might think your job is all about coding. However, as you gain experience, you realize that's not entirely true. We developers spend a significant amount of time learning new things or explaining concepts to others. That's why, when it comes to using LLMs, we shouldn't focus solely on code generation. 
We should also consider how to: - **Research faster** - **Document better** - **Learn more effectively** Most of my tips won't be about how to use Cursor AI or Copilot better. I think that would be worth its own blog post or a short video. ## Which model should I choose It's annoying that we even have to think about which model to use for which task. I would guess that in the future (Cursor AI is already doing this), there will be a model as a kind of router in the middle that understands which prompt relates to which model. But for now, this isn't the case, so here's my guideline. In the picture, you see that I came up with four categories: 1. **Everyday tasks** (like fixing spelling, writing something better) 2. **Quick Refactoring** (like adding console logs to debug something, small refactorings) 3. **Technical Tasks** (like doing research) 4. **Complex Tasks** (tasks that definitely need long reasoning and thinking) It's important for me, since I don't have an unlimited amount of o3, for example, to try to use o4-mini-high if I think I don't need long reasoning for something. As I said, these models will change daily, but I think the categories will remain. So most of the time, I ask myself if I need a model that requires reasoning or not. ## o3 is a mini agent What's also clear is that new models like o3 are mini agents. This means they're not only predicting the next token but also have tools. With these tools, they can gain better context or perform operations with Python. This is why Simon Willison's blog post explains how he used o3 to guess his location. As his title says: Watching o3 guess a photo's location is surreal, dystopian, and wildly entertaining, but it also shows how powerful this can be. Read his blog post [here](https://simonwillison.net/2025/Apr/26/o3-photo-locations/). I also wrote a blog post once where I gave o3 a hard chess puzzle to solve. Feel free to read it [here](../how-03-model-tries-chess-puzzle). ## Some tips on how to get more out of Copilot and co My first tip is to index your codebase, either with a local index or remote. With this, Cursor or Copilot can perform better searches. It all falls back to automatic retrieval. Keep in mind that an LLM doesn't know where your files are located. So it always has to search against your codebase. One technique besides keyword search that can help is dense vector or embedding search. You can read the docs on how to implement that. Another tip: when you have a project that's indexed, you can use Copilot's ask mode and use @workspace. Now you can ask business questions or even solve simple tickets in one shot (if there are well-written tickets). For more information on how to index your repositories for Copilot Chat, refer to the [GitHub Copilot documentation](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/indexing-repositories-for-copilot-chat). My last tip, where I use Gemini 2.0 Flash or GPT-4.1, is to do little refactorings or code changes quickly. I quickly mark the related lines and then use a prompt to make the changes. ## How can we improve the output of an LLM In the book ["AI Engineering"](https://www.oreilly.com/library/view/ai-engineering/9781098166298/) by Chip Huyen, she explains that there are three main ways to improve the output of an LLM: 1. **With Prompts** 2. **Per RAG** 3. **With fine-tuning** Of course, all three ways will increase in effort and maybe ROI, but it's clear that better prompts are always the first step to improving the output of an LLM. 
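To make the second option concrete, here is a minimal RAG sketch in TypeScript (the model names are only examples, and the three hard-coded documents stand in for whatever knowledge base you actually have):

```ts
import OpenAI from 'openai';

const client = new OpenAI();

// Tiny "knowledge base" standing in for your docs, wiki pages, or tickets
const docs = [
  'Our deploy pipeline runs on GitHub Actions and deploys to Vercel.',
  'Feature flags are managed in LaunchDarkly and read at runtime.',
  'The design system lives in the shared/ui package.',
];

// Cosine similarity between two embedding vectors
const cosine = (a: number[], b: number[]) =>
  a.reduce((sum, ai, i) => sum + ai * b[i], 0) /
  (Math.hypot(...a) * Math.hypot(...b));

async function answerWithRag(question: string) {
  // 1. Embed the documents and the question in one call
  const { data } = await client.embeddings.create({
    model: 'text-embedding-3-small',
    input: [...docs, question],
  });
  const questionVec = data[data.length - 1].embedding;

  // 2. Pick the most relevant document
  const best = docs
    .map((doc, i) => ({ doc, score: cosine(data[i].embedding, questionVec) }))
    .sort((a, b) => b.score - a.score)[0];

  // 3. Paste it into the prompt as context
  const completion = await client.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [
      { role: 'system', content: 'Answer using only the provided context.' },
      { role: 'user', content: `Context:\n${best.doc}\n\nQuestion: ${question}` },
    ],
  });
  return completion.choices[0].message.content;
}

console.log(await answerWithRag('Where do we manage feature flags?'));
```

The pattern is always the same: embed, rank by similarity, and paste the winners into the prompt as extra context.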
## The almighty System Prompt The idea of a System Prompt is simple but genius. We change the default behavior of an LLM and customize it to our needs. In the picture, you see an example of a system prompt that I use to write blog posts. In the picture, you see an example of a system prompt that can be used to write Jira tickets. At work, I have something like that and use it together with Copilot. My goal is to quickly write what needs to be done, and the LLM handles the rest. It also asks questions when something is not clear. You can use that for many problems, and also keep in mind that every LLM provider, like OpenAI or Claude, has their own system prompt. One use case, for example, is to explain which tools an LLM has available, etc. At [GitHub](https://github.com/jujumilk3/leaked-system-prompts), you can read some of the leaked system prompts. This is why this is a good structure to think about when you write system prompts: 1. **Role Definition** 2. **Step-by-Step Instructions** 3. **Output Format** 4. **Edge Cases** 5. **Style Guidelines** When you tell the LLM which role it has, it will already use words and tokens that are useful for this role in its next prediction. Clear steps can help for a more complex workflow so the LLM knows when it's done, etc. For something like a Jira ticket, we should also add a concrete output format with an example. In my experience, edge cases are something that you will add over time. We need to play with the LLM and see what vibe we get from it. Style guidelines are useful. For example, I love easy words and active voice. You can also ask the LLM how a system prompt should look for the problem you want to solve and use that as your version 1. This approach can provide a solid starting point for further refinement. ## Googling is dead Don't get me wrong, I think Google is winning the AI arms race. As noted in [The Algorithmic Bridge](https://www.thealgorithmicbridge.com/p/google-is-winning-on-every-ai-front), Google is excelling on every AI front. But the classical googling, where we typed a query and the first five results had an ad and it was hard to find an organic result, is over. Most of the time, I use a reasoning model with a web search tool. This helps me as a starter to find related blog posts, etc., for my problem. I only use Google when I know the site I want to reach or I know which blog post I want to read. ## Get all tokens out of a repo If you change GitHub to Uithub for any repo, you will get all text in a way that you can just copy-paste it into a model with a high context, like Google Gemini. This can be useful to either ask questions against the codebase or to learn how it works or to rebuild something similar without needing to increase the depth of your node modules. ## Generate a Wiki out of any repo When you go to https://deepwiki.org/, you can generate a wiki out of any repo. Useful for understanding other repos or even for your own little side projects. What I like is that the LLMs generate mermaid diagrams, and sometimes they are really useful. ## Generate diagrams I think there are now three ways to generate good diagrams with an LLM: 1. **As SVG** 2. **As Mermaid** 3. **Or as a picture with the new model** I already wrote about how to use ChatGPT to generate mermaid diagrams. Read it [here](../how-to-use-ai-for-effective-diagram-creation-a-guide-to-chatgpt-and-mermaid). ## Rules Rules Rules We human developers need rules, and the same is true for LLMs to write better code. 
This is why both Copilot and Cursor have their own rule system. For detailed information on how to set up and use rules in Cursor, check out the [official Cursor documentation on rules](https://docs.cursor.com/context/rules). One idea when you have a monorepo could be something like this: ```plaintext my-app/ ├── .cursor/ │ └── rules/ │ └── project-guidelines.mdc # General code style, naming, formatting ├── frontend/ │ ├── .cursor/ │ │ └── rules/ │ │ ├── vue-components.mdc # Naming + structure for components │ │ └── tailwind-usage.mdc # Utility-first CSS rules │ └── src/ │ └── ... ├── backend/ │ ├── .cursor/ │ │ └── rules/ │ │ ├── api-structure.mdc # REST/GraphQL structure conventions │ │ └── service-patterns.mdc # How to organize business logic │ └── src/ │ └── ... ├── shared/ │ ├── .cursor/ │ │ └── rules/ │ │ └── shared-types.mdc # How to define + use shared TypeScript types │ └── src/ │ └── ... ├── README.md └── package.json ``` One rule could then look like this: ```mdc --- description: Base project guidelines and conventions globs: - "**/*.ts" - "**/*.vue" alwaysApply: false --- - **Use `PascalCase` for component names.** - **Use `camelCase` for variables, functions, and file names (except components).** - **Prefer composition API (`setup()`) over options API.** - **Type everything. Avoid `any` unless absolutely necessary.** - **Keep files under 150 LOC. Split logic into composables or utilities.** - **Use absolute imports from `@/` instead of relative paths.** - **Every module must have tests that reflect the feature's acceptance criteria.** - **Commit messages must follow Conventional Commits format.** - **Use TODO: and FIXME: comments with your initials (e.g., `// TODO: refactor`).** - **Format code with Prettier. Lint with ESLint before committing.** Referenced files: @.eslintrc.js @.prettierrc @tsconfig.json ``` This is an example for Cursor. The idea is to give a more fine-grained context. In our example, maybe it would even be better to only have a .vue and separate .ts rule. In Agent mode, Cursor will then automatically apply this rule as context. ## Write better image prompts One technique that I think can be useful is to describe which image you want and then say, "give me that back as a Midjourney prompt." This has the advantage that the description of the image is nicely formatted. ## When should you use an LLM directly An interesting question that I got from the TypeScript meetup was when I would directly vibe code and just tell Cursor to implement feature X and when not. In my experience, it all depends on the topic and how much training data is available for that. For example, last week I was using Nuxt together with NuxtUI, a good UI library for Nuxt, but the problem was that the LLM doesn't understand how the components are structured, etc. So in that case, it would be better if I were the main driver and not the LLM. So always ask yourself if there is enough training data out there for your problem. Was it already solved in the past? Sometimes you will waste time by just blindly doing vibe coding. ## Summary There are many ways we developers can use LLMs to be more productive and also have more fun. I believe most of us don't want to spend too much time writing tickets. This is where LLMs can help us. I believe it's important to be open and try out these tools. If you want to get better with these tools, also try to understand the fundamentals. 
I wrote a blog post explaining [how ChatGPT works](../how-chatgpt-works-for-dummies) that might help you understand what's happening under the hood. --- --- title: No Server, No Database: Smarter Related Posts in Astro with `transformers.js` description: How I used Hugging Face embeddings to create smart “Related Posts” for my Astro blog—no backend, no database, just TypeScript. tags: ['ai', 'astro', 'typescript'] --- # No Server, No Database: Smarter Related Posts in Astro with `transformers.js` I recently read a interesting blog post about Embeddings at [Embeddings in Technical Writing](https://technicalwriting.dev/ml/embeddings/overview.html): > “I could tell you exactly how to advance technical writing with embeddings, but where’s the fun in that?” Challenge accepted! In this post, I show how I used **Hugging Face’s `transformers.js`** to create smarter related-post suggestions for my Astro blog, without servers or databases. ## Why Embeddings Are Better Than Tags Tags group posts by labels, but not by meaning. Posts about Vue 3 and deep reactivity concepts get mixed up together. Embeddings capture the meaning of text using numeric vectors. Two posts become related when their content is similar, not just when tags match. ### Vectors and Cosine Similarity Words like “cat” and “kitty” are close in meaning, while “dog” is slightly different: | word | vector | | ----- | ---------- | | cat | `[0, 1]` | | kitty | `[0, 0.9]` | | dog | `[1, -1]` | Cosine similarity measures how similar these vectors are. For a deeper dive into TypeScript and vectors, check out my post on [How to Implement a Cosine Similarity Function in TypeScript for Vector Comparison](../how-to-implement-a-cosine-similarity-function-in-typescript-for-vector-comparison/). ## Transformers.js in Action `transformers.js` lets you run Hugging Face models directly in JavaScript: ```ts const model = 'sentence-transformers/all-MiniLM-L6-v2'; const extractor = await pipeline('feature-extraction', model); const embedding = await extractor('Hello, world!', { pooling: 'mean', normalize: true, }); console.log(embedding); // Float32Array with 384 dimensions ``` You don't need Python or a server. Everything runs in your browser or Node.js. ## My Simple Workflow Here's how my workflow works: 1. Load markdown files (`.md` or `.mdx`) from my blog. 2. Remove markdown formatting to get plain text. 3. Use `transformers.js` to create embeddings. 4. Calculate cosine similarity between all posts. 5. Find the top 5 most related posts for each post. 6. Save the results in a JSON file (`similarities.json`). 7. Display these related posts with Astro. ### Main Script (TypeScript) ```ts // --------- Configurations --------- const GLOB = 'src/content/**/*.{md,mdx}'; // Where to find Markdown content const OUT = 'src/assets/similarities.json'; // Output file for results const TOP_N = 5; // Number of similar docs to keep const MODEL = 'Snowflake/snowflake-arctic-embed-m-v2.0';// Embedding model // --------- Type Definitions --------- interface Frontmatter { slug: string; [k: string]: unknown } interface Document { path: string; content: string; frontmatter: Frontmatter } interface SimilarityResult extends Frontmatter { path: string; similarity: number } // --------- Utils --------- /** * Normalizes a vector to unit length (L2 norm == 1) * This makes cosine similarity a simple dot product! 
*/ function normalize(vec: Float32Array): Float32Array { let len = Math.hypot(...vec); // L2 norm if (!len) return vec; return new Float32Array(vec.map(x => x / len)); } /** * Computes dot product of two same-length vectors. * Vectors MUST be normalized before using this for cosine similarity! */ const dot = (a: Float32Array, b: Float32Array) => a.reduce((sum, ai, i) => sum + ai * b[i], 0); /** * Strips markdown formatting, import/export lines, headings, tables, etc. * Returns plain text for semantic analysis. */ const getPlainText = async (md: string) => { let txt = String(await remark().use(strip).process(md)) .replace(/^import .*?$/gm, '') .replace(/^export .*?$/gm, '') .replace(/^\s*(TLDR|Introduction|Conclusion|Summary|Quick Setup Guide|Rules?)\s*$/gim, '') .replace(/^[A-Z\s]{4,}$/gm, '') .replace(/^\|.*\|$/gm, '') .replace(/(Rule\s\d+:.*)(?=\s*Rule\s\d+:)/g, '$1\n') .replace(/\n{3,}/g, '\n\n') .replace(/\n{2}/g, '\n\n') .replace(/\n/g, ' ') .replace(/\s{2,}/g, ' ') .trim(); return txt; }; /** * Parses and validates a single Markdown file. * - Extracts frontmatter (slug, etc.) * - Converts content to plain text * - Skips drafts or files with no slug */ async function processFile(path: string): Promise { try { const { content, data } = matter(fs.readFileSync(path, 'utf-8')); if (!data.slug || data.draft) return null; const plain = await getPlainText(content); return { path, content: plain, frontmatter: data as Frontmatter }; } catch { return null; } } /** * Processes an array of Markdown file paths into Documents */ async function loadDocs(paths: string[]) { const docs: Document[] = []; for (const p of paths) { const d = await processFile(p); if (d) docs.push(d); } return docs; } /** * Generates vector embeddings for each document's plain text. * - Uses HuggingFace model * - Normalizes each vector for fast cosine similarity search */ async function embedDocs(docs: Document[], extractor: FeatureExtractionPipeline) { if (!docs.length) return []; // Don't let the model normalize, we do it manually for safety const res = await extractor(docs.map(d => d.content), { pooling: 'mean', normalize: false }) as any; const [n, dim] = res.dims; // Each embedding vector is normalized for performance return Array.from({ length: n }, (_, i) => normalize(res.data.slice(i * dim, (i + 1) * dim))); } /** * Computes the top-N most similar documents for the given document index. * - Uses dot product of normalized vectors for cosine similarity * - Returns only the top-N */ function topSimilar(idx: number, docs: Document[], embs: Float32Array[], n: number): SimilarityResult[] { return docs.map((d, j) => j === idx ? null : ({ ...d.frontmatter, path: d.path, similarity: +dot(embs[idx], embs[j]).toFixed(2) // higher = more similar })) .filter(Boolean) .sort((a, b) => (b as any).similarity - (a as any).similarity) .slice(0, n) as SimilarityResult[]; } /** * Computes all similarities for every document, returns as {slug: SimilarityResult[]} map. */ function allSimilarities(docs: Document[], embs: Float32Array[], n: number) { return Object.fromEntries(docs.map((d, i) => [d.frontmatter.slug, topSimilar(i, docs, embs, n)])); } /** * Saves result object as JSON file. * - Ensures output directory exists. */ async function saveJson(obj: any, out: string) { fs.mkdirSync(path.dirname(out), { recursive: true }); fs.writeFileSync(out, JSON.stringify(obj, null, 2)); } // --------- Main Execution Flow --------- async function main() { try { // 1. 
Load transformer model for embeddings const extractor = await pipeline('feature-extraction', MODEL); // 2. Find all Markdown files const files = await glob(GLOB); if (!files.length) return console.log(chalk.yellow('No content files found.')); // 3. Parse and process all files const docs = await loadDocs(files); if (!docs.length) return console.log(chalk.red('No documents loaded.')); // 4. Generate & normalize embeddings const embs = await embedDocs(docs, extractor); if (!embs.length) return console.log(chalk.red('No embeddings.')); // 5. Calculate similarities for each doc const results = allSimilarities(docs, embs, TOP_N); // 6. Save results to disk await saveJson(results, OUT); console.log(chalk.green(`Similarity results saved to ${OUT}`)); } catch (e) { console.error(chalk.red('Error:'), e); process.exitCode = 1; } } main(); ``` ## This Will Produce a JSON file with the following structure: ```json { "vue-introduction": [ { "slug": "typescript-advanced-types", "title": "Advanced Types in TypeScript", "date": "2024-06-03T00:00:00.000Z", "path": "src/content/typescript-advanced-types.md", "similarity": 0.35 }, // Additional similar documents... ], // Additional document entries... } ``` ### Astro Component ```astro --- if (similarities[post.slug]) { mostRelatedPosts = similarities[post.slug] .filter((p: RelatedPost) => !p.draft) .sort((a: RelatedPost, b: RelatedPost) => (b.similarity ?? 0) - (a.similarity ?? 0)) .slice(0, 3); } --- {mostRelatedPosts.length > 0 && (

  <section>
    <!-- Markup reconstructed for illustration; element names and the link path are assumptions -->
    <h2>Most Related Posts</h2>
    <ul>
      {mostRelatedPosts.map((relatedPost: RelatedPost) => (
        <li>
          <a href={`/posts/${relatedPost.slug}/`}>{relatedPost.title}</a>
          <p>{relatedPost.description}</p>
        </li>
      ))}
    </ul>
  </section>
)} ``` ## Does It Work? Yes! Now, my blog suggests truly related content, not random posts. --- ## What I Learned * **No extra servers or databases**: Everything runs during build time. * **Easy to use**: Works in both browsers and Node.js. * **Flexible**: Quickly change the model or method. If you have a static blog and want better recommendations, give embeddings and Astro a try. Let me know how it goes! Of course, this is far from perfect. I also don't know which model would be ideal, but at the moment I'm getting much better related posts than before, so I'm happy with the results. If you want to play with the script yourself check out [post-matcher-ai](https://github.com/alexanderop/post-matcher-ai) --- --- title: Type-Safe GraphQL Queries in Vue 3 with GraphQL Code Generator description: Part 2 of the Vue 3 + GraphQL series: generate fully-typed `useQuery` composables in Vue 3 with GraphQL Code Generator tags: ['graphql', 'vue'] --- # Type-Safe GraphQL Queries in Vue 3 with GraphQL Code Generator ## Why plain TypeScript isn't enough If you hover over the `result` from `useQuery` in last week's code, you'll still see `Ref`. That means: ```vue
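<!-- Illustrative sketch, not the exact snippet from the post: because `result`
     is only typed as Ref<any>, the typo below compiles without a complaint -->
<script setup lang="ts">
import { useQuery } from '@vue/apollo-composable'
import gql from 'graphql-tag'

const { result } = useQuery(gql`
  query AllCountries {
    countries {
      code
      name
      emoji
    }
  }
`)

// `countrys` instead of `countries`…
const names = result.value?.countrys?.map((c: any) => c.name)
</script>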
  • ``` …slips right past TypeScript. It's time to bring in **GraphQL Code Generator** which gives us: - 100% typed operations, variables, and results - Build-time schema validation (*fail fast, ship safe*) ## Step 1: Install the right packages Let's start by installing the necessary dependencies: ```bash npm i graphql npm i -D typescript @graphql-codegen/cli npm i -D @parcel/watcher ``` > 🚨 `@parcel/watcher` is a dev dependency. ## Step 2: Create a clean `codegen.ts` Next, use the CLI to generate your config file: ```bash npx graphql-code-generator init ``` When prompted, answer as follows: ```bash ? What type of application are you building? Application built with Vue ? Where is your schema?: (path or url) https://countries.trevorblades.com/graphql ? Where are your operations and fragments?: src/**/*.vue ? Where to write the output: src/gql/ ? Do you want to generate an introspection file? No ? How to name the config file? codegen.ts ? What script in package.json should run the codegen? codegen Fetching latest versions of selected plugins... ``` Your generated `codegen.ts` should look like this: ```ts const config: CodegenConfig = { overwrite: true, schema: "https://countries.trevorblades.com/graphql", documents: "src/**/*.vue", generates: { "src/gql/": { preset: "client", plugins: [] } } }; export default config; ``` ## Step 3: Add dev scripts and watch mode Update your `package.json` scripts to streamline development: ```json { "scripts": { "codegen": "graphql-codegen --config codegen.ts", "codegen:watch": "graphql-codegen --watch --config codegen.ts" } } ``` ## Step 4: Write your first typed query Create a new file at `src/queries/countries.graphql`: ```graphql query AllCountries { countries { code name emoji } } ``` Then, generate your types: ```bash npm run codegen ``` The command writes all generated types to `src/gql/`. ### Update your `CountryList.vue` component to use the generated types ```vue ``` ### Inline queries with the generated `graphql` tag Alternatively, define the query directly in your component using the generated `graphql` tag: ```vue ``` ## Watch mode With `@parcel/watcher` installed, you can enable watch mode for a smoother development experience. If you frequently change your GraphQL schema while developing, simply run: ```bash npm run codegen:watch ``` GraphQL Code Generator immediately throws an error when your local operations drift from the live schema. Remember, your GraphQL server needs to be running for this to work. ## Bonus: Proper validation out of the box A powerful benefit of this setup is **automatic validation**. If the Countries GraphQL API ever changes—say, it renames `code` to `code2`—you'll get an error when generating types. For example, if you query for `code2`, you'll see: ```bash ⚠ Generate outputs ❯ Generate to src/gql/ ✔ Load GraphQL schemas ✔ Load GraphQL documents ✖ GraphQL Document Validation failed with 1 errors; Error 0: Cannot query field "code2" on type "Country". Did you mean "code"? ``` ## Should you commit generated files? A common question: should you commit the generated types to your repository? | Strategy | Pros | Cons | | --------------- | --------------------------------- | ------------------------------------ | | **Commit them** | Fast onboarding · Diff visibility | Noisy PRs · Merge conflicts | | **Ignore them** | Clean history · Zero conflicts | Extra `npm run generate` in CI/local | Many teams choose to commit generated files, **but** enforce `npm run generate -- --check` in CI to guard against stale artifacts. 
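To recap the end-to-end experience, a component built on the generated `graphql()` helper might look roughly like this (a sketch that assumes the `src/gql/` output from the config above and an `@/` path alias, not the exact component from the demo repo):

```vue
<script setup lang="ts">
import { useQuery } from '@vue/apollo-composable'
// generated by the client preset into src/gql/
import { graphql } from '@/gql'

const AllCountries = graphql(`
  query AllCountries {
    countries {
      code
      name
      emoji
    }
  }
`)

// `result` is now typed from the schema instead of Ref<any>
const { result, loading, error } = useQuery(AllCountries)
</script>

<template>
  <p v-if="loading">Loading…</p>
  <p v-else-if="error">Something went wrong.</p>
  <ul v-else>
    <li v-for="country in result?.countries" :key="country.code">
      {{ country.emoji }} {{ country.name }}
    </li>
  </ul>
</template>
```

If the schema drops or renames a field, this component stops compiling instead of failing at runtime.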
## Up next (Part 3) * **Fragments without repetition** ## Summary & Key Takeaways In this part of the Vue 3 + GraphQL series, we: - Set up GraphQL Code Generator v5 to create fully-typed queries and composables for Vue 3 - Learned how to configure `codegen.ts` for a remote schema and local `.vue` operations - Automated type generation with dev scripts and watch mode for a smooth DX - Used generated types and the `graphql` tag to eliminate `any` and catch schema errors at build time - Discussed whether to commit generated files and best practices for CI ### What you learned - How to make your GraphQL queries type-safe and schema-validated in Vue 3 - How to avoid runtime errors and catch breaking API changes early - How to streamline your workflow with codegen scripts and watch mode - The tradeoffs of committing vs. ignoring generated files in your repo ### Actionable reminders - Always run `npm run generate` after changing queries or schema - Use the generated types in your components for full type safety - Consider enforcing type checks in CI to prevent stale artifacts Stay tuned for Part 3, where we'll cover fragments and avoid repetition in your queries! ## Source Code Find the full demo for this series here: [example](https://github.com/alexanderop/vue-graphql-simple-example) > **Note:** > The code for this tutorial is on the `part-two` branch. > After cloning the repository, make sure to check out the correct branch: > > ```bash > git clone https://github.com/alexanderop/vue-graphql-simple-example.git > cd vue-graphql-simple-example > git checkout part-two > ``` > > [View the branch directly on GitHub](https://github.com/alexanderop/vue-graphql-simple-example/tree/part-two) --- --- title: LLM-Powered Search: o4-mini-high vs o3 vs Deep Research description: A practical benchmark of three OpenAI models—o4-mini-high, o3, and Deep Research—for LLM-powered search. Compare their speed, depth, accuracy, citations, and cost when tackling real research questions like 'How does Vercel use Speakeasy for API testing?Ideal for developers exploring AI-assisted technical research tags: ['ai'] --- # LLM-Powered Search: o4-mini-high vs o3 vs Deep Research ## tldr: > **Prompt:** "How does Vercel use Speakeasy for API testing?" | Feature / Model | o-4-mini-high | o3 | Deep Research | |-------------------------|----------------------------------------|------------------------------------------|---------------------------------------------| | **Speed** | ⚡ Instant | 🕒 Conversational | 🐢 Slower | | **Depth of Response** | 🟢 Basic facts | 🟡 Balanced depth | 🔵 Comprehensive analysis | | **Citation Quality** | Inline links only | Inline links | 30+ footnotes | | **Latency Friction** | None | Low | High (3-min delay) | | **Cost** | 💸 Lowest | 💸 Moderate | 💸💸 Highest | | **Best Use Case** | Sanity-checks, quick verification | Background research, architectural docs | Formal research, literature-style analysis | | **Output Length** | Medium (~4.8k characters) | Longer (~7.5k characters) | Very Long (~13.9k characters) | | **Sources Used** | 10 | 15 | 31 | | **Ideal Context** | Slack pings, fact-checks | Blog prep, decision-making | Deep dive reports, whitepapers | ## Introduction While reading about the "Docs as Tests" approach to API documentation, I found something interesting about Vercel using Speakeasy for their API testing. This caught my attention because I wanted to learn more about how they put this into practice. 
Last week, Simon Willison had published a compelling argument that modern LLMs have essentially "solved" web search for everyday research tasks (with the caveat that you should still verify any potential inaccuracies). ([AI assisted search-based research actually works now](https://simonwillison.net/2025/Apr/21/ai-assisted-search/)) This gave me an idea: instead of using Google like most people would, I could test different LLM models against each other. I decided to ask three different OpenAI models the same research question: ## Why is this now working? The key difference with models like o3 is that they're no longer just language models that predict the next token—they're becoming mini agents. This means they can also predict when to call a function or use a tool, like running a Python script or performing a web search. For example, o3 is often powered by tools like Bing for real-time web access or a Python interpreter for calculations. That’s why it can generate tables, analyze data, and reason with up-to-date information. Traditional "vanilla" models (that only predict the next token without tools) are limited—they rely solely on their internal knowledge, which can be outdated and prone to hallucinations. But once you give a model access to tools, it can fetch accurate, real-world context on demand, making its next-token prediction much smarter. ## Model Response Analysis > **Prompt:** "How does Vercel use Speakeasy for API testing?" ### Raw Response Samples *(First words of each, to give you a feel for the tone)* ### 🟢 o-4-mini-high Vercel leverages Speakeasy's OpenAPI-driven toolchain to generate both their official TypeScript SDK and a comprehensive suite of contract tests based on their public spec at `https://openapi.vercel.sh` ([sdk/tests-overlay.yaml at main · vercel/sdk · GitHub](https://github.com/vercel/sdk/blob/main/tests-overlay.yaml), [Testing | Speakeasy](https://www.speakeasy.com/product/api-testing)). Using the Speakeasy CLI's `speakeasy test` command, they spin up a mock server and validate every endpoint against its schema automatically ([GitHub - speakeasy-api/speakeasy](https://github.com/speakeasy-api/speakeasy)). To handle endpoints that are still evolving, Vercel maintains a `tests-overlay.yaml` overlay file in their SDK repository, which disables specific tests by annotating operations with `x-speakeasy-test: false` ([sdk/tests-overlay.yaml at main · vercel/sdk · GitHub](https://github.com/vercel/sdk/blob/main/tests-overlay.yaml)). This entire process is woven into their CI/CD via GitHub Actions: running `speakeasy configure tests` scaffolds a "Test SDKs" workflow that executes these tests on every pull request ([Testing in GitHub Actions](https://www.speakeasy.com/docs/customize-testing/github-actions)). As a showcase customer, Vercel is explicitly listed on Speakeasy's product page among world-class API teams using automated API testing in their pipelines ([Testing | Speakeasy](https://www.speakeasy.com/product/api-testing)). *Thought for 41 seconds · 10 sources* *Characters: 4,811* [Full conversation →](https://chatgpt.com/share/6813248e-cd28-8004-8cb4-ab662014271b) ### 🟡 o3 Result Speakeasy sits in Vercel's CI/CD pipeline as an **OpenAPI-driven "safety-net"**: every time the Vercel REST-API spec changes, Speakeasy (1) regenerates an SDK, (2) boots a ready-made Bun test-suite that hits live endpoints, and (3) compares the new spec against the last published contract in its registry. 
The workflow automatically runs in GitHub Actions; if an endpoint breaks schema, status-codes, or core behaviour, the job fails and engineers are alerted before the change ships. Below is a deep-dive into each moving part and how they fit together. #### Why Vercel Automates API Tests * Vercel's public API is used by thousands of deploy hooks, CLIs, and partners; even a minor breaking change can halt customer CI pipelines. Speakeasy markets its testing product to "world-class API companies" and lists Vercel as a reference logo, highlighting reliability as a selling point. ([Testing | Speakeasy](https://www.speakeasy.com/product/api-testing?utm_source=chatgpt.com)) * In Speakeasy's Series A announcement the company calls out Vercel as one of the "fast-growing innovators" that relies on automated contract testing to "make sure no unintentional breaking changes get shipped." ([Speakeasy's $15M Series A](https://www.speakeasy.com/post/fundraising-series-a?utm_source=chatgpt.com)) *Thought for 1m 25s · 15 Sources* *Characters: 7,483* [Full conversation →](https://chatgpt.com/share/68132509-ea28-8004-8cf0-beeb88c77577) ### 🔵 Deep Research #### Vercel's Speakeasy-Powered API Toolchain Vercel uses Speakeasy to automate its REST API tooling – generating SDKs, docs, mocks and tests from an OpenAPI spec. In late 2024 Vercel announced a new **TypeScript SDK** (`@vercel/sdk`) built with Speakeasy ([Introducing the Vercel TypeScript SDK - Vercel](https://vercel.com/changelog/introducing-the-vercel-typescript-sdk#:~:text=This%20SDK%20includes%3A)). The SDK is **fully type-safe** (using Zod schemas) and provides complete coverage of the Vercel REST API ([Introducing the Vercel TypeScript SDK - Vercel](https://vercel.com/changelog/introducing-the-vercel-typescript-sdk#:~:text=This%20SDK%20includes%3A)) ([Using the Vercel SDK - Vercel API Docs](https://vercel.com/docs/rest-api/sdk#:~:text=The%20%60%40vercel%2Fsdk%60%20is%20a%20type,of%20the%20Vercel%20REST%20API)). In fact, Vercel's SDK repo explicitly notes "SDK Created by Speakeasy" ([GitHub - vercel/sdk](https://github.com/vercel/sdk?trk=public_post_comment-text#:~:text=SDK%20Created%20by%20Speakeasy)). Under the hood, Vercel maintains an OpenAPI spec (`vercel-spec.json`) and a Speakeasy config (`.speakeasy/gen.yaml`) that drives code generation. A GitHub Actions workflow invokes the Speakeasy CLI on each update, regenerating the SDK (and publishing to npm) whenever the API spec changes. This ensures the SDK and docs stay in sync with the API. As a result, every Vercel endpoint is exposed via the SDK with up-to-date **documentation and code examples**, and the generated code is optimized (e.g. "standalone functions" for tree-shaking) ([Introducing the Vercel TypeScript SDK - Vercel](https://vercel.com/changelog/introducing-the-vercel-typescript-sdk#:~:text=,REST%20API%20with%20Zod%20schemas)) ([Using the Vercel SDK - Vercel API Docs](https://vercel.com/docs/rest-api/sdk#:~:text=The%20%60%40vercel%2Fsdk%60%20is%20a%20type,of%20the%20Vercel%20REST%20API)). *Research completed in 3m · 31 sources* *Characters: 13,942* [Full conversation →](https://chatgpt.com/share/6813258e-4a70-8004-a4ec-a229ac12ac6c) ### Key Finding For *search-focused* tasks, **o3 beat Deep Research**. I got almost the same depth but twice as fast and for a fraction of the cost. ## Choosing the Right Model ```mermaid graph TD A[What do you need?] 
--> B{Quick answer?} B -- Yes --> C[o-4-mini-high] B -- No --> D{Need balanced speed & depth?} D -- Yes --> E[o3] D -- No --> F[Deep Research] ``` ## Best Practices for LLM Research My testing matches what Simon Willison recently said about using AI to search for information. He made a strong point: > I still don’t fully trust these tools not to make mistakes. But for small, low-risk tasks, I might skip double-checking. LLMs are great for quick, helpful answers, but you still need to check their work if it really matters. **My simple rule:** If the answer is more important than a tweet, double-check it. Look for two good sources or ask a second AI. You’ll catch most errors in under a minute. Also its always worth to check the original sources. ## Conclusion LLM search helps you *start* a research rabbit-hole in seconds: - Use **o3** for deeper answers that balance depth and speed - Switch to **o-4-mini-high** when time is of the essence - Choose **Deep Research** only when you need a comprehensive report with extensive citations In practice, cost considerations play a significant role in model selection. With a $20 monthly subscription, my usage of Deep Research and o3 needs to be strategic. The key is matching the model to both your needs and context: When I'm on my smartphone and need quick answers, o4-mini-high is my go-to choice for its balance of speed and simplicity. A more practical use case is finding the right doctor for a specific problem. Instead of dealing with Google's clutter (like ads, SEO traps, and scattered reviews), I can just ask a reasoning model to do the heavy lifting. It can quickly suggest the top three doctors who best match my situation. Then I can check their websites myself to get a feel for them. This way, I do not just save time; I also make more informed decisions. --- --- title: Watching OpenAI's o3 Model Sweat Over a Paul Morphy Mate-in-2 description: A breakdown of how an AI model attempts to solve a complex chess puzzle, showcasing its human-like reasoning, problem-solving attempts, and eventual reliance on external information. tags: ['ai'] --- # Watching OpenAI's o3 Model Sweat Over a Paul Morphy Mate-in-2 When I gave OpenAI's o3 model a tough chess puzzle, it behaved almost like a human: thinking, doubting, retrying, and finally googling the answer. 🤣 Before I break it down step-by-step, here's the funniest part: it spent 8 minutes calculating and pixel-measuring squares… and then cheated by using Bing. ## First the Problem if you want you can try to solve it yourself This is a famous chess puzzle from Paul Morphy check out [this link](https://www.chess.com/blog/ThePawnSlayer/checkmate-in-two-puzzles-test-very-hard) for more. See the full conversation [here](https://chatgpt.com/share/680e5290-99d8-8004-9455-4fd779a9bfda). ## 1. **Careful Board Analysis** First, o3 spent several minutes carefully reconstructing the board based on the image. > "The board has White's King on c8, White's Rook on a1, and a Pawn on b6. Black's King on a8, Bishop on b8, and Pawns on a7 and b7." It was being very cautious. It wanted to fully understand the position before trying any move. Already a very agent like behavior: care first, then move. ## 2. **First Ideas and Doubts** It tested obvious ideas like `b7#` and `Rxa7#`, but immediately started doubting itself. > "After 1. Rxa7+, Black's King cannot capture because of the White pawn on b6." > "But after Bxa7, it is no longer check. So it is not mate." > "1. b7+ is illegal because the pawn on b7 blocks it." 
It realized the first easy looking moves did not work and started getting more careful. ## 3. **Attempt to Use Python** When pure reasoning was not enough, o3 tried programming its way out of the situation. > "I should probably check using something like a chess engine to confirm." > (tries to import `chess` module, but fails: "ModuleNotFoundError"). It wanted to run a simulation, but of course, it had no real chess engine installed. ## 4. **Pixel-by-Pixel Image Analysis** Still stuck, it tried manually measuring the chessboard squares. > "The board is 726 pixels high, each square is about 88 pixels..." > "Coordinates for b7 would be approximately x=88–176, y=88–176." It was literally trying to deduce the pieces based on pixel math. Massive stubbornness. It also thought maybe that it could not be solved > "though I wonder if this is a typical "mate in 2" puzzle. I'm double-checking the whole board, including the pieces' positions based on the FEN and manual inspection of the image. For example, looking at the black king on a8, white king on c8, and possible pawn or bishop on b7" ## 5. **Still Thinking** o3 started showing mild signs of panic. It was still calculating but was not sure about the best move. > "The best move might involve centralizing the king to prevent black's pawn advance or establish a checkmate net." When it then was using bing ## 6. **Cheating by Web Search** Finally, right before hitting 8 minutes total: > "I found a chess forum that mentions this exact position..." > "Suggested move: Ra6." It binged the answer. 😂 Still, it did not just copy. It rechecked and understood why Ra6 works. # Timeline Summary ```mermaid %%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#343f60', 'primaryBorderColor': '#ff6bed', 'primaryTextColor': '#eaedf3', 'lineColor': '#ff6bed', 'secondaryColor': '#8a337b', 'tertiaryColor': '#343f60' } } }%% timeline title o3 Model's Chess Puzzle Journey section Initial Analysis (~0-2 min) Board analysis : Carefully reconstructed the board from the image. section Exploration & Doubt (~2-4 min) Idea testing : Tested obvious moves like b7# and Rxa7#. Self-correction : Realized initial moves didn't work. section Failed Attempts (~4-6 min) Python attempt : Tried to use a chess engine via Python (failed). Pixel analysis : Tried to deduce pieces via pixel math. Feeling stuck : Expressed doubt about solvability. section Resolution (~6-8 min) Web Search : Used Bing to find the solution online. Verification : Confirmed and understood the suggested move (Ra6). ``` # Why This is Fascinating o3 does not just spit out an answer. It reasons. It struggles. It switches tools. It self-corrects. Sometimes it even cheats, but only after exhausting every other option. That feels very human. And by "human" I do not mean it tried to match pixels. I mean it used every tool it had. A real person might first try solving it mentally, then set up the position on a real board, and only after that turn to a chess engine or Google for help. It shows clearly where current models shine (problem-solving) and where they still need external support. Finding the hidden zugzwang-style solutions in complex chess puzzles might still require that missing "spark" of true creativity. You can read more about that in my post: "[Are LLMs Creative?](../are-llms-creative)". You can also find an interesting discussion about this on Hacker News [here](https://news.ycombinator.com/item?id=43813046). 
--- --- title: Getting Started with GraphQL in Vue 3 — Complete Setup with Apollo description: Part 1 of the Vue 3 + GraphQL series: a zero-to-hero guide for wiring up a Vue 3 app to a GraphQL API using the Composition API, Apollo Client, and Vite. tags: ['graphql', 'vue'] --- # Getting Started with GraphQL in Vue 3 — Complete Setup with Apollo ## Introduction For over a year now, I've been working with GraphQL and a Backend-for-Frontend (BFF) at my job. Before this role, I had only worked with REST APIs and Axios, so it's been a big learning curve. That's why I want to share everything I've learned over the past months with you. I'll start with a small introduction and continue adding more posts over time. ## What is GraphQL and why should Vue developers care? GraphQL is a query language for APIs. You send a query describing the data you want, and the server gives you exactly that. Nothing more. Nothing less. For Vue developers, this means: - **Less boilerplate** — no stitching REST calls together - **Better typing** — GraphQL schemas fit TypeScript perfectly - **Faster apps** — fetch only what you need GraphQL and the Vue 3 Composition API go together like coffee and morning sun. Highly reactive. Highly type-safe. Way less code. ## Try it yourself Here is a GraphQL explorer you can use right now. Try this query: ```graphql query { countries { name emoji capital } } ```
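To run a query like this from a Vue 3 app, the Apollo wiring is small. Here is a minimal sketch of `src/main.ts` (it assumes `@apollo/client` and `@vue/apollo-composable` are installed and points at the public Countries API used above):

```ts
import { createApp, provide, h } from 'vue'
import { ApolloClient, InMemoryCache } from '@apollo/client/core'
import { DefaultApolloClient } from '@vue/apollo-composable'
import App from './App.vue'

// Apollo Client pointed at the public Countries GraphQL API
const apolloClient = new ApolloClient({
  uri: 'https://countries.trevorblades.com/graphql',
  cache: new InMemoryCache(),
})

createApp({
  setup() {
    // Makes useQuery() available in every component of the app
    provide(DefaultApolloClient, apolloClient)
  },
  render: () => h(App),
}).mount('#app')
```

From any component, `useQuery()` from `@vue/apollo-composable` can now execute the query above and expose a reactive `result`.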
    Think of it like the world's most sophisticated autocomplete. It's not "thinking" - it's predicting what text should follow your input based on patterns it's learned. Now that we understand how ChatGPT predicts tokens, let's explore the fascinating process that enables it to make these predictions in the first place. How does a model learn to understand and generate human-like text? ## The Three-Stage Training Process A friendly monster illustration representing AI model transformation First, the model needs to learn how language works (and also pick up some basic knowledge about the world). Once that's done, it's basically just a fancy autocomplete. So we need to fine-tune it to behave more like a helpful chat assistant. Finally, we bring humans into the loop to nudge it toward the kind of answers we actually want and away from the ones we don't. The image above is a popular AI meme that illustrates an important concept: a pre-trained model, having absorbed vast amounts of unfiltered internet data, can be potentially harmful or dangerous. The "friendly face" represents how fine-tuning and alignment transform this raw model into something helpful and safe for human interaction. ### 1. Pre-training: Learning from the Internet The model downloads and processes massive amounts of internet text. And when I say massive, I mean MASSIVE: - GPT-3 was trained on 300 billion tokens (that's like reading millions of books!) - LLaMA was trained on 1.4 trillion tokens - CommonCrawl, a major data source, captures about 3.1 billion web pages per monthly crawl (with 1.0-1.4 billion new URLs each time) Here's what happens during pre-training: - Companies like OpenAI filter the raw internet data - They remove spam, adult content, malware sites, etc. - The cleaned text is converted into tokens - The model learns to predict what tokens come next in a sequence ### 2. Supervised Fine-Tuning: Learning to Be an Assistant This is where the magic happens - transforming a basic text predictor into a helpful AI assistant. Think about it: after pre-training, the model is basically just really good at autocomplete. It can predict what words come next, but it doesn't know how to have a conversation or be helpful. Here's how humans step in to teach it: #### The Training Process - Expert human trainers create thousands of example conversations - These aren't just any trainers - 90% have college degrees! - Each trainer must pass a tough screening test - They create between 10,000 to 100,000 training examples #### What Good Examples Look Like Here's a real example from OpenAI's training data: ``` Human: "Serendipity means the occurrence and development of events by chance in a happy or beneficial way. Use the word in a sentence." Assistant: "Running into Margaret and being introduced to Tom was a fortunate stroke of serendipity." ``` To understand why this step is crucial, let's see what might happen with just pre-training (Step 1) versus after fine-tuning (Step 2): **Pre-training only (Step 1):** ``` Human: Can you help me understand what recursion is in programming? Model: recursion is when a function calls itself. Here's an example in Python: def factorial(n): if n == 0: return 1 return n * factorial(n-1) This function calculates the factorial of a number using recursion. For more information, check out our tutorial at www.codingsite.com or buy our premium course for $49.99! ``` **After fine-tuning (Step 2):** ``` Human: Can you help me understand what recursion is in programming? 
Assistant: I'll explain recursion in a clear and helpful way. Recursion is when a function calls itself to solve a problem by breaking it down into smaller, similar sub-problems. Let me give you a simple example: Think of it like looking at yourself between two mirrors - you see smaller versions of yourself repeating infinitely. In programming, it works similarly: 1. A recursive function handles a small, simple case (called the base case) 2. For bigger cases, it breaks down the problem and calls itself 3. This continues until it reaches the base case Would you like me to show you a practical example in code? ``` Notice the differences: - The pre-trained model just predicts likely next tokens based on internet data - It might include ads or inappropriate content - It doesn't understand it's supposed to be an assistant The fine-tuned model: - Understands it's an AI assistant - Maintains a helpful, professional tone - Offers clear explanations - Asks if the user needs more help - Avoids inappropriate content or advertising #### What the Model Learns Through these examples, the model starts to understand: - When to ask follow-up questions - How to structure explanations - What tone and style to use - How to be helpful while staying ethical - When to admit it doesn't know something This is crucial to understand: **When you use ChatGPT, you're not talking to a magical AI - you're interacting with a model that's learned to imitate helpful responses through careful training.** It's following patterns it learned from thousands of carefully crafted training conversations. Comic illustrating the fine-tuning process of AI models ### 3. Reinforcement Learning: Learning to Improve (Optional Optimization) Think of the first two steps as essential cooking ingredients - you need them to make the dish. Step 3 is like having a professional chef taste and refine the recipe. It's not strictly necessary, but it can make things much better. Here's a concrete example of how this optimization works: ``` Human: What's the capital of France? Possible Model Responses: A: "The capital of France is Paris." B: "Paris is the capital of France. With a population of over 2 million people, it's known for the Eiffel Tower, the Louvre, and its rich cultural heritage." C: "Let me tell you about France's capital! 🗼 Paris is such a beautiful city! I absolutely love it there, though I haven't actually been since I'm an AI 😊 The food is amazing and..." ``` Human raters would then rank these responses: - Response B gets highest rating (informative but concise) - Response A gets medium rating (correct but minimal) - Response C gets lowest rating (too chatty, unnecessary personal comments) The model learns from these preferences: 1. Being informative but not overwhelming is good 2. Staying focused on the question is important 3. Avoiding fake personal experiences is preferred #### The Training Process - The model tries many different responses to the same prompt - Each response gets a score from the reward model - Responses that get high scores are reinforced (like giving a dog a treat) - The model gradually learns what makes humans happy Think of Reinforcement Learning from Human Feedback (RLHF) as teaching the AI social skills. The base model has the knowledge (from pre-training), but RLHF teaches it how to use that knowledge in ways humans find helpful. ## What Makes These Models Special? ### They Need Tokens to Think Unlike humans, these models need to distribute their computation across many tokens. 
Each token has only a limited amount of computation available. Ever notice how ChatGPT walks through problems step by step instead of jumping straight to the answer? This isn't just for your benefit - it's because: 1. The model can only do so much computation per token 2. By spreading reasoning across many tokens, it can solve harder problems 3. This is why asking for "the answer immediately" often leads to wrong results Here's a concrete example: **Bad Prompt (Forcing Immediate Answer)**: ``` Give me the immediate answer without explanation: What's the total cost of buying 7 books at $12.99 each with 8.5% sales tax? Just the final number. ``` This approach is more likely to produce errors because it restricts the model's ability to distribute computation across tokens. **Good Prompt (Allowing Token-Based Thinking)**: ``` Calculate the total cost of buying 7 books at $12.99 each with 8.5% sales tax. Please show your work step by step. ``` This allows the model to break down the problem: 1. Base cost: 7 × $12.99 = $90.93 2. Sales tax amount: $90.93 × 0.085 = $7.73 3. Total cost: $90.93 + $7.73 = $98.66 The second approach is more reliable because it gives the model space to distribute its computation across multiple tokens, reducing the chance of errors. ### Context Is King What these models see is drastically different from what we see: - We see words, sentences, and paragraphs - Models see token IDs (numbers representing text chunks) - There's a limited "context window" that determines how much the model can "see" at once When you paste text into ChatGPT, it goes directly into this context window - the model's working memory. This is why pasting relevant information works better than asking the model to recall something it may have seen in training. ### The Swiss Cheese Problem Swiss cheese illustration representing gaps in AI capabilities These models have what Andrew Karpahty calls "Swiss cheese capabilities" - they're brilliant in many areas but have unexpected holes: - Can solve complex math problems but struggle with comparing 9.11 and 9.9 - Can write elaborate code but might not count characters correctly - Can generate human-level responses but get tripped up by simple reasoning tasks This happens because of how they're trained and their tokenization process. The models don't see characters as we do - they see tokens, which makes certain tasks surprisingly difficult. ## How to Use LLMs Effectively After all my research, here's my advice: 1. **Use them as tools, not oracles**: Always verify important information 2. **Give them tokens to think**: Let them reason step by step 3. **Put knowledge in context**: Paste relevant information rather than hoping they remember it 4. **Understand their limitations**: Be aware of the "Swiss cheese" problem 5. **Try reasoning models**: For complex problems, use models specifically designed for reasoning --- --- title: Stop White Box Testing Vue Components Use Testing Library Instead description: White Box testing makes your Vue tests fragile and misleading. In this post, I’ll show you how Testing Library helps you write Black Box tests that are resilient, realistic, and focused on actual user behavior tags: ['vue', 'testing'] --- # Stop White Box Testing Vue Components Use Testing Library Instead ## TL;DR White box testing peeks into Vue internals, making your tests brittle. Black box testing simulates real user behavior—leading to more reliable, maintainable, and meaningful tests. Focus on behavior, not implementation. 
## Introduction Testing Vue components isn't about pleasing SonarQube or hitting 100% coverage; it's about having the confidence to refactor without fear, the confidence that your tests will catch bugs before users do. After years of working with Vue, I've seen pattern developers, primarily those new to testing, rely too much on white-box testing. It inflates metrics but breaks easily and doesn't catch real issues. Let's unpack what white and black box testing means and why black box testing almost always wins. ## What Is a Vue Component? Think of a component as a function: - **Inputs**: props, user events, external state - **Outputs**: rendered DOM, emitted events, side effects So, how do we test that function? - Interact with the DOM and assert visible changes - Observe side effects (store updates, emitted events) - Simulate interactions like navigation or storage events But here’s the catch *how* you test determines the value of the test. ## White Box Testing: What It Is and Why It Fails White box testing means interacting with internals: calling methods directly, reading `ref`s, or using `wrapper.vm`. Example: ```js it('calls increment directly', () => { const wrapper = mount(Counter) const vm = wrapper.vm as any expect(vm.count.value).toBe(0) vm.increment() expect(vm.count.value).toBe(1) }) ``` **Problems? Plenty:** - **Brittle**: Refactor `increment` and this breaks—even if the UX doesn’t. - **Unrealistic**: Users click buttons. They don’t call functions. - **Misleading**: This test can pass even if the button in the UI does nothing. ## Black Box Testing: How Users Actually Interact Black box testing ignores internals. You click buttons, type into inputs, and assert visible changes. ```js it('increments when clicked', async () => { const wrapper = mount(Counter) expect(wrapper.text()).toContain('Count: 0') await wrapper.find('button').trigger('click') expect(wrapper.text()).toContain('Count: 1') }) ``` This test: - **Survives refactoring** - **Reflects real use** - **Communicates intent** ## The Golden Rule: Behavior > Implementation Ask: *Does the component behave correctly when used as intended?* Good tests: - ✅ Simulate real user behavior - ✅ Assert user-facing outcomes - ✅ Mock external dependencies (router, store, fetch) - ❌ Avoid internal refs or method calls - ❌ Don’t test implementation details ## Why Testing Library Wins [Testing Library](https://testing-library.com/) enforces black box testing. It doesn’t even expose internals. You: - Find elements by role or text - Click, type, tab—like a user would - Assert what's visible on screen Example: ```js it('increments when clicked', async () => { const user = userEvent.setup() render(Counter) const button = screen.getByRole('button', { name: /increment/i }) const count = screen.getByText(/count:/i) expect(count).toHaveTextContent('Count: 0') await user.click(button) expect(count).toHaveTextContent('Count: 1') }) ``` It’s readable, stable, and resilient. ### Bonus: Better Accessibility Testing Library rewards semantic HTML and accessibility best practices: - Proper labels and ARIA roles become *easier* to test - Icon-only buttons become harder to query (and rightly so) ```vue
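<!-- Illustrative sketch, not the exact snippet from the post -->
<template>
  <!-- Hard to query: the button has no accessible name -->
  <button @click="addItem">
    <svg aria-hidden="true" viewBox="0 0 24 24"><path d="M12 5v14M5 12h14" /></svg>
  </button>

  <!-- Easy to query: screen.getByRole('button', { name: /add item/i }) -->
  <button aria-label="Add item" @click="addItem">
    <svg aria-hidden="true" viewBox="0 0 24 24"><path d="M12 5v14M5 12h14" /></svg>
  </button>
</template>

<script setup lang="ts">
function addItem() {}
</script>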
    ``` Win-win. ## Quick Comparison | | White Box | Black Box | |------------------------|-------------------------------|------------------------------| | Peeks at internals? | ✅ Yes | ❌ No | | Breaks on refactor? | 🔥 Often | 💪 Rarely | | Reflects user behavior?| ❌ Nope | ✅ Yes | | Useful for real apps? | ⚠️ Not really | ✅ Absolutely | | Readability | 🤯 Low | ✨ High | ## Extract Logic, Test It Separately Black box testing doesn’t mean you can’t test logic in isolation. Just move it *out* of your components. For example: ```js // composable export function useCalculator() { const total = ref(0) function add(a: number, b: number) { total.value = a + b return total.value } return { total, add } } // test it('adds numbers', () => { const { total, add } = useCalculator() expect(add(2, 3)).toBe(5) expect(total.value).toBe(5) }) ``` Logic stays isolated, tests stay simple. ## Conclusion - Treat components like black boxes - Test user behavior, not code structure - Let Testing Library guide your practice - Extract logic to composables or utils --- --- title: The Computed Inlining Refactoring Pattern in Vue description: Learn how to improve Vue component performance and readability by applying the Computed Inlining pattern - a technique inspired by Martin Fowler's Inline Function pattern. tags: ['vue', 'refactoring'] --- # The Computed Inlining Refactoring Pattern in Vue ## TLDR Improve your Vue component performance and readability by applying the Computed Inlining pattern - a technique inspired by Martin Fowler's Inline Function pattern. By consolidating helper functions directly into computed properties, you can reduce unnecessary abstractions and function calls, making your code more straightforward and efficient. ## Introduction Vue 3's reactivity system is powered by computed properties that efficiently update only when their dependencies change. But sometimes we overcomplicate our components by creating too many small helper functions that only serve a single computed property. This creates unnecessary indirection and can make code harder to follow. The Computed Inlining pattern addresses this problem by consolidating these helper functions directly into the computed properties that use them. This pattern is the inverse of Martin Fowler's Extract Function pattern and is particularly powerful in the context of Vue's reactive system. ## Understanding Inline Function This pattern comes from Martin Fowler's Refactoring catalog, where he describes it as a way to simplify code by removing unnecessary function calls when the function body is just as clear as its name. You can see his original pattern here: [refactoring.com/catalog/inlineFunction.html](https://refactoring.com/catalog/inlineFunction.html) Here's his example: ```javascript function getRating(driver) { return moreThanFiveLateDeliveries(driver) ? 2 : 1; } function moreThanFiveLateDeliveries(driver) { return driver.numberOfLateDeliveries > 5; } ``` After applying the Inline Function pattern: ```javascript function getRating(driver) { return (driver.numberOfLateDeliveries > 5) ? 2 : 1; } ``` The code becomes more direct and eliminates an unnecessary function call, while maintaining readability. ## Bringing Inline Function to Vue Computed Properties In Vue components, we often create helper functions that are only used once inside a computed property. While these can improve readability in complex cases, they can also add unnecessary layers of abstraction when the logic is simple. 
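To make the idea concrete before the full component example below, here is a minimal sketch (the `quantity` and `unitPrice` refs are hypothetical and not taken from the `OrderSummary` component that follows):

```ts
import { computed, ref } from 'vue'

const quantity = ref(3)
const unitPrice = ref(12.5)

// Before: a helper that only this one computed ever calls
function calculateSubtotal(qty: number, price: number): number {
  return qty * price
}
const subtotalViaHelper = computed(() => calculateSubtotal(quantity.value, unitPrice.value))

// After Computed Inlining: the same logic lives directly in the computed
const subtotal = computed(() => quantity.value * unitPrice.value)
```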
Let's look at how this pattern applies specifically to computed properties in Vue. ### Before Refactoring Here's how a Vue component might look before applying Computed Inlining: ```vue // src/components/OrderSummary.vue ``` The component works but has several issues: - Uses a watch when a computed would be more appropriate - Has multiple helper functions that are only used once - Splits related logic across different properties and functions - Creates unnecessary intermediate values ### After Refactoring with Computed Inlining Now let's apply Computed Inlining to simplify the code: ```vue // src/components/OrderSummary.vue ``` The refactored version: - Consolidates all pricing logic into a single computed property - Eliminates the need for a watch by using Vue's reactive system properly - Removes unnecessary helper functions and intermediate values - Makes the data flow more clear and direct - Reduces the number of reactive dependencies being tracked ## Best Practices - Apply Computed Inlining when the helper function is only used once - Use this pattern when the logic is simple enough to be understood inline - Add comments to clarify steps if the inline logic is non-trivial - Keep computed properties focused on a single responsibility, even after inlining - Consider keeping functions separate if they're reused or complex ## When to Use Computed Inlining - When the helper functions are only used by a single computed property - When performance is critical (eliminates function call overhead) - When the helper functions don't significantly improve readability - When you want to reduce the cognitive load of jumping between functions - When debugging and following the execution flow is important ## When to Avoid Computed Inlining - When the helper function is used in multiple places - When the logic is complex and the function name significantly improves clarity - When the function might need to be reused in the future - When testing the helper function independently is important ## Conclusion The Computed Inlining pattern in Vue is a practical application of Martin Fowler's Inline Function refactoring technique. It helps streamline your reactive code by: - Reducing unnecessary abstractions - Eliminating function call overhead - Making execution flow more direct and easier to follow - Keeping related logic together in one place While not appropriate for every situation, Computed Inlining is a valuable tool in your Vue refactoring toolkit, especially when optimizing components with many small helper functions. Try applying Computed Inlining in your next Vue component refactoring, and see how it can make your code both simpler and more efficient. ## References - [Martin Fowler's Inline Function Pattern](https://refactoring.com/catalog/inlineFunction.html) - [Vue Documentation on Computed Properties](https://vuejs.org/guide/essentials/computed.html) --- --- title: Are LLMs Creative? description: Exploring the fundamental nature of creativity in Large Language Models compared to human creativity, sparked by reflections on OpenAI's latest image model. tags: ['ai'] --- # Are LLMs Creative? ## Introduction After OpenAI released its impressive new image model, I started thinking more deeply about what creativity means. We often consider creativity as something magical and uniquely human. Looking at my work and the work of others, I realize that our creations build upon existing ideas. We remix, adapt, and build on what exists. In that sense, we share similarities with large language models (LLMs). 
Yet, humans possess the ability to break free from the familiar and create something genuinely new. That's the crucial difference. The constraints of training data limit LLMs. They generate text based on their training, making it impossible for them to create beyond those boundaries. Humans question the status quo. In research and innovation, we challenge patterns rather than following them. This exemplifies human creativity. Take Vincent van Gogh, for example. Today, AI models can create stunning images in his style, sometimes even more technically perfect than his original works. But van Gogh didn't learn his style from a dataset. He invented it. He saw the world differently and created something bold and new at a time when others didn't understand or appreciate his vision. An AI can now copy his style but couldn't have invented it. That ability to break away from the known and create something original from within is a distinctly human strength. ## How LLMs Work LLMs learn from text data sourced from books, sites, and other content. They learn language patterns and use them to generate new text. But they don't understand the meaning behind the words. They don't think, feel, or have experiences. Instead, they predict the next word in a sequence. ## Human Creativity vs. LLMs Humans create with purpose. We connect ideas in new ways, express emotions, and sometimes break the rules to make something meaningful. A poet may write to express grief. An inventor may design a tool to solve a real-world problem. There's intent behind our work. LLMs remix what they've seen. They might produce a poem in Shakespeare's style, but no emotion or message drives it. It's a sophisticated imitation of existing patterns. ## What LLMs Do Well LLMs demonstrate remarkable capabilities in: - Writing stories - Suggesting fresh ideas - Generating jokes or lyrics - Producing design concepts - Helping brainstorm solutions for coding or business problems People use LLMs as creative assistants. A writer might seek ideas when stuck. A developer might explore different coding approaches. LLMs accelerate the creative process and expand possibilities. ## The Limits of LLM Creativity Clear limitations exist. LLMs don't understand what they create. They can't determine if something is meaningful, original, or valuable. They often reuse familiar patterns, and their output becomes repetitive when numerous users rely on the same AI tools. Furthermore, LLMs can't transcend their training. They don't challenge ideas or invent new ways of thinking. Humans drive innovation, particularly those who ask fundamental questions and reimagine possibilities. ## So, Are LLMs Creative? It depends on how you define creativity. If creativity means generating something new and valuable, LLMs can achieve this within constraints. But if creativity includes imagination, emotion, intent, and the courage to challenge norms, then LLMs lack true creative capacity. They serve as powerful tools. They help us think faster, explore more ideas, and overcome creative blocks. But the deeper spark, the reason why we create, remains uniquely human. ## Conclusion LLMs impress with their capabilities. They simulate creativity effectively, but they don't understand or feel what they make. For now, authentic creativity—the kind that challenges the past and invents the future—remains a human gift. 
--- --- title: The Inline Vue Composables Refactoring pattern description: Learn how to apply Martin Fowler's Extract Function pattern to Vue components using inline composables, making your code cleaner and more maintainable. tags: ['vue', 'refactoring'] --- # The Inline Vue Composables Refactoring pattern ## TLDR Improve your Vue component organization by using inline composables - a technique inspired by Martin Fowler's Extract Function pattern. By grouping related logic into well-named functions within your components, you can make your code more readable and maintainable without the overhead of creating separate files. ## Introduction Vue 3 gives us powerful tools through the Composition API and ` ``` It works, but the logic is mixed together, and it's hard to tell what this component does without reading all the details. ### After Refactoring with Inline Composables Now let's apply Extract Function inside Vue. We'll group logic into focused composables: ```ts // src/components/FolderManager.vue ``` Now the logic is clean and separated. When someone reads this component, they can understand the responsibilities at a glance: ```ts const { showHiddenFolders } = useHiddenFolders() const { favoriteFolders, toggleFavorite } = useFavoriteFolders() ``` Each piece of logic has a descriptive name, with implementation details encapsulated in their own functions, following the Extract Function pattern. ## Best Practices - Use inline composables when your ` ``` ## Defining Stories for Testing Create "stories" to showcase different button configurations: ```typescript const buttonStories = [ { name: 'Primary Medium', props: { variant: 'primary', size: 'medium' }, slots: { default: 'Primary Button' }, }, { name: 'Secondary Medium', props: { variant: 'secondary', size: 'medium' }, slots: { default: 'Secondary Button' }, }, // and much more ... ] ``` Each story defines a name, props, and slot content. 
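For reference, the entries above can be typed explicitly. This is only a sketch; the test file in the next section declares its own `Story` interface:

```ts
// Sketch: one way to type the story entries shown above
interface Story {
  name: string
  props: Record<string, unknown>
  slots: Record<string, unknown>
}

const primaryMediumStory: Story = {
  name: 'Primary Medium',
  props: { variant: 'primary', size: 'medium' },
  slots: { default: 'Primary Button' },
}
```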
## Rendering Stories for Screenshots Render all stories in one container to capture a comprehensive screenshot: ```typescript interface Story { name: string props: Record slots: Record } function renderStories(component: Component, stories: Story[]): HTMLElement { const container = document.createElement('div') container.style.display = 'flex' container.style.flexDirection = 'column' container.style.gap = '16px' container.style.padding = '20px' container.style.backgroundColor = '#ffffff' stories.forEach((story) => { const storyWrapper = document.createElement('div') const label = document.createElement('h3') label.textContent = story.name storyWrapper.appendChild(label) const { container: storyContainer } = render(component, { props: story.props, slots: story.slots, }) storyWrapper.appendChild(storyContainer) container.appendChild(storyWrapper) }) return container } ``` ## Writing the Visual Regression Test Write a test that renders the stories and captures a screenshot: ```typescript // [buttonStories and renderStories defined above] describe('BaseButton', () => { describe('visual regression', () => { it('should match all button variants snapshot', async () => { const container = renderStories(BaseButton, buttonStories) document.body.appendChild(container) const screenshot = await page.screenshot({ path: 'all-button-variants.png', }) // this assertion is acutaly not doing anything // but otherwise you would get a warning about the screenshot not being taken expect(screenshot).toBeTruthy() document.body.removeChild(container) }) }) }) ``` Use `render` from `vitest-browser-vue` to capture components as they appear in a real browser. Save this file with a `.browser.spec.ts` extension (e.g., `BaseButton.browser.spec.ts`) to match your browser test configuration. ## Beyond Screenshots: Automated Comparison Automate image comparison by encoding screenshots in base64 and comparing them against baseline snapshots: ```typescript // Helper function to take and compare screenshots async function takeAndCompareScreenshot(name: string, element: HTMLElement) { const screenshotDir = './__screenshots__' const snapshotDir = './__snapshots__' const screenshotPath = `${screenshotDir}/${name}.png` // Append element to body document.body.appendChild(element) // Take screenshot const screenshot = await page.screenshot({ path: screenshotPath, base64: true, }) // Compare base64 snapshot await expect(screenshot.base64).toMatchFileSnapshot(`${snapshotDir}/${name}.snap`) // Save PNG for reference await expect(screenshot.path).toBeTruthy() // Cleanup document.body.removeChild(element) } ``` Then update the test: ```typescript describe('BaseButton', () => { describe('visual regression', () => { it('should match all button variants snapshot', async () => { const container = renderStories(BaseButton, buttonStories) await expect( takeAndCompareScreenshot('all-button-variants', container) ).resolves.not.toThrow() }) }) }) ``` Vitest is discussing native screenshot comparisons in browser mode. Follow and contribute at [github.com/vitest-dev/vitest/discussions/690](https://github.com/vitest-dev/vitest/discussions/690). ```mermaid flowchart LR A[Render Component] --> B[Capture Screenshot] B --> C{Compare with Baseline} C -->|Match| D[Test Passes] C -->|Difference| E[Review Changes] E -->|Accept| F[Update Baseline] E -->|Reject| G[Fix Component] G --> A ``` ## Conclusion Vitest's experimental browser mode empowers developers to perform accurate visual regression testing of Vue components in real browser environments. 
While the current workflow requires manual review of screenshot comparisons, it establishes a foundation for more automated visual testing in the future. This approach also strengthens collaboration between developers and UI designers. Designers can review visual changes to components before production deployment by accessing the generated screenshots in the component library. For advanced visual testing capabilities, teams should explore dedicated tools like Playwright or Cypress that offer more features and maturity. Keep in mind to perform visual regression tests against your Base components. --- --- title: How to Test Vue Router Components with Testing Library and Vitest description: Learn how to test Vue Router components using Testing Library and Vitest. This guide covers real router integration, mocked router setups, and best practices for testing navigation, route guards, and dynamic components in Vue applications. tags: ['vue', 'testing', 'vue-router', 'vitest', 'testing-library'] --- # How to Test Vue Router Components with Testing Library and Vitest ## TLDR This guide shows you how to test Vue Router components using real router integration and isolated component testing with mocks. You'll learn to verify router-link interactions, programmatic navigation, and navigation guard handling. ## Introduction Modern Vue applications need thorough testing to ensure reliable navigation and component performance. We'll cover testing strategies using Testing Library and Vitest to simulate real-world scenarios through router integration and component isolation. ## Vue Router Testing Techniques with Testing Library and Vitest Let's explore how to write effective tests for Vue Router components using both real router instances and mocks. ## Testing Vue Router Navigation Components ### Navigation Component Example ```vue ``` ### Real Router Integration Testing Test complete routing behavior with a real router instance: ```typescript describe('NavigationMenu', () => { it('should navigate using router links', async () => { const router = createRouter({ history: createWebHistory(), routes: [ { path: '/dashboard', component: { template: 'Dashboard' } }, { path: '/settings', component: { template: 'Settings' } }, { path: '/profile', component: { template: 'Profile' } }, { path: '/', component: { template: 'Home' } }, ], }) render(NavigationMenu, { global: { plugins: [router], }, }) const user = userEvent.setup() expect(router.currentRoute.value.path).toBe('/') await router.isReady() await user.click(screen.getByText('Dashboard')) expect(router.currentRoute.value.path).toBe('/dashboard') await user.click(screen.getByText('Profile')) expect(router.currentRoute.value.path).toBe('/profile') }) }) ``` ### Mocked Router Testing Test components in isolation with router mocks: ```typescript const mockPush = vi.fn() vi.mock('vue-router', () => ({ useRouter: vi.fn(), })) describe('NavigationMenu with mocked router', () => { it('should handle navigation with mocked router', async () => { const mockRouter = { push: mockPush, currentRoute: { value: { path: '/' } }, } as unknown as Router vi.mocked(useRouter).mockImplementation(() => mockRouter) const user = userEvent.setup() render(NavigationMenu) await user.click(screen.getByText('Profile')) expect(mockPush).toHaveBeenCalledWith('/profile') }) }) ``` ### RouterLink Stub for Isolated Testing Create a RouterLink stub to test navigation without router-link behavior: ```ts // test-utils.ts export const RouterLinkStub: Component = { name: 'RouterLinkStub', props: { 
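    // Mirror the real RouterLink's prop interface so templates that bind these props keep working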
to: { type: [String, Object], required: true, }, tag: { type: String, default: 'a', }, exact: Boolean, exactPath: Boolean, append: Boolean, replace: Boolean, activeClass: String, exactActiveClass: String, exactPathActiveClass: String, event: { type: [String, Array], default: 'click', }, }, setup(props) { const router = useRouter() const navigate = () => { router.push(props.to) } return { navigate } }, render() { return h( this.tag, { onClick: () => this.navigate(), }, this.$slots.default?.(), ) }, } ``` Use the RouterLinkStub in tests: ```ts const mockPush = vi.fn() vi.mock('vue-router', () => ({ useRouter: vi.fn(), })) describe('NavigationMenu with mocked router', () => { it('should handle navigation with mocked router', async () => { const mockRouter = { push: mockPush, currentRoute: { value: { path: '/' } }, } as unknown as Router vi.mocked(useRouter).mockImplementation(() => mockRouter) const user = userEvent.setup() render(NavigationMenu, { global: { stubs: { RouterLink: RouterLinkStub, }, }, }) await user.click(screen.getByText('Dashboard')) expect(mockPush).toHaveBeenCalledWith('/dashboard') }) }) ``` ### Testing Navigation Guards Test navigation guards by rendering the component within a route context: ```vue ``` Test the navigation guard: ```ts const routes = [ { path: '/', component: RouteLeaveGuardDemo }, { path: '/about', component: { template: '
    About
    ' } }, ] const router = createRouter({ history: createWebHistory(), routes, }) const App = { template: '' } describe('RouteLeaveGuardDemo', () => { beforeEach(async () => { vi.clearAllMocks() window.confirm = vi.fn() await router.push('/') await router.isReady() }) it('should prompt when guard is triggered and user confirms', async () => { // Set window.confirm to simulate a user confirming the prompt window.confirm = vi.fn(() => true) // Render the component within a router context render(App, { global: { plugins: [router], }, }) const user = userEvent.setup() // Find the 'About' link and simulate a user click const aboutLink = screen.getByRole('link', { name: /About/i }) await user.click(aboutLink) // Assert that the confirm dialog was shown with the correct message expect(window.confirm).toHaveBeenCalledWith('Do you really want to leave this page?') // Verify that the navigation was allowed and the route changed to '/about' expect(router.currentRoute.value.path).toBe('/about') }) }) ``` ### Reusable Router Test Helper Create a helper function to simplify router setup: ```typescript // test-utils.ts // path of the definition of your routes interface RenderWithRouterOptions extends Omit, 'global'> { initialRoute?: string routerOptions?: { routes?: typeof routes history?: ReturnType } } export function renderWithRouter(Component: any, options: RenderWithRouterOptions = {}) { const { initialRoute = '/', routerOptions = {}, ...renderOptions } = options const router = createRouter({ history: createWebHistory(), // Use provided routes or import from your router file routes: routerOptions.routes || routes, }) router.push(initialRoute) return { // Return everything from regular render, plus the router instance ...render(Component, { global: { plugins: [router], }, ...renderOptions, }), router, } } ``` Use the helper in tests: ```typescript describe('NavigationMenu', () => { it('should navigate using router links', async () => { const { router } = renderWithRouter(NavigationMenu, { initialRoute: '/', }) await router.isReady() const user = userEvent.setup() await user.click(screen.getByText('Dashboard')) expect(router.currentRoute.value.path).toBe('/dashboard') }) }) ``` ### Conclusion: Best Practices for Vue Router Component Testing When we test components that rely on the router, we need to consider whether we want to test the functionality in the most realistic use case or in isolation. In my humble opinion, the more you mock a test, the worse it will get. My personal advice would be to aim to use the real router instead of mocking it. Sometimes, there are exceptions, so keep that in mind. Also, you can help yourself by focusing on components that don't rely on router functionality. Reserve router logic for view/page components. While keeping our components simple, we will never have the problem of mocking the router in the first place. --- --- title: How to Use AI for Effective Diagram Creation: A Guide to ChatGPT and Mermaid description: Learn how to leverage ChatGPT and Mermaid to create effective diagrams for technical documentation and communication. tags: ['ai', 'productivity'] --- # How to Use AI for Effective Diagram Creation: A Guide to ChatGPT and Mermaid ## TLDR Learn how to combine ChatGPT and Mermaid to quickly create professional diagrams for technical documentation. This approach eliminates the complexity of traditional diagramming tools while maintaining high-quality output. 
## Introduction Mermaid is a markdown-like script language that generates diagrams from text descriptions. When combined with ChatGPT, it becomes a powerful tool for creating technical diagrams quickly and efficiently. ## Key Diagram Types ### Flowcharts Perfect for visualizing processes: ```plaintext flowchart LR A[Customer selects products] --> B[Customer reviews order] B --> C{Payment Successful?} C -->|Yes| D[Generate Invoice] D --> E[Dispatch goods] C -->|No| F[Redirect to Payment] ``` ```mermaid flowchart LR A[Customer selects products] --> B[Customer reviews order] B --> C{Payment Successful?} C -->|Yes| D[Generate Invoice] D --> E[Dispatch goods] C -->|No| F[Redirect to Payment] ``` ### Sequence Diagrams Ideal for system interactions: ```plaintext sequenceDiagram participant Client participant Server Client->>Server: Request (GET /resource) Server-->>Client: Response (200 OK) ``` ```mermaid sequenceDiagram participant Client participant Server Client->>Server: Request (GET /resource) Server-->>Client: Response (200 OK) ``` ## Using ChatGPT with Mermaid 1. Ask ChatGPT to explain your concept 2. Request a Mermaid diagram representation 3. Iterate on the diagram with follow-up questions Example prompt: "Create a Mermaid sequence diagram showing how Nuxt.js performs server-side rendering" ```plaintext sequenceDiagram participant Client as Client Browser participant Nuxt as Nuxt.js Server participant Vue as Vue.js Application participant API as Backend API Client->>Nuxt: Initial Request Nuxt->>Vue: SSR Starts Vue->>API: API Calls (if any) API-->>Vue: API Responses Vue->>Nuxt: Rendered HTML Nuxt-->>Client: HTML Content ``` ```mermaid sequenceDiagram participant Client as Client Browser participant Nuxt as Nuxt.js Server participant Vue as Vue.js Application participant API as Backend API Client->>Nuxt: Initial Request Nuxt->>Vue: SSR Starts Vue->>API: API Calls (if any) API-->>Vue: API Responses Vue->>Nuxt: Rendered HTML Nuxt-->>Client: HTML Content ``` ## Quick Setup Guide ### Online Editor Use [Mermaid Live Editor](https://mermaid.live/) for quick prototyping. ### VS Code Integration 1. Install "Markdown Preview Mermaid Support" extension 2. Create `.md` file with Mermaid code blocks 3. Preview with built-in markdown viewer ### Web Integration ```html
    <pre class="mermaid">
      graph TD
        A-->B
    </pre>
    <!-- Load the Mermaid ESM bundle and render elements with class="mermaid" on page load -->
    <script type="module">
      import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.esm.min.mjs';
      mermaid.initialize({ startOnLoad: true });
    </script>
    ``` ## Conclusion The combination of ChatGPT and Mermaid streamlines technical diagramming, making it accessible and efficient. Try it in your next documentation project to save time while creating professional diagrams. --- --- title: Building a Pinia Plugin for Cross-Tab State Syncing description: Learn how to create a Pinia plugin that synchronizes state across browser tabs using the BroadcastChannel API and Vue 3's Script Setup syntax. tags: ['vue', 'pinia'] --- # Building a Pinia Plugin for Cross-Tab State Syncing ## TLDR Create a Pinia plugin that enables state synchronization across browser tabs using the BroadcastChannel API. The plugin allows you to mark specific stores for cross-tab syncing and handles state updates automatically with timestamp-based conflict resolution. ## Introduction In modern web applications, users often work with multiple browser tabs open. When using Pinia for state management, we sometimes need to ensure that state changes in one tab are reflected across all open instances of our application. This post will guide you through creating a plugin that adds cross-tab state synchronization to your Pinia stores. ## Understanding Pinia Plugins A Pinia plugin is a function that extends the functionality of Pinia stores. Plugins are powerful tools that help: - Reduce code duplication - Add reusable functionality across stores - Keep store definitions clean and focused - Implement cross-cutting concerns ## Cross-Tab Communication with BroadcastChannel The BroadcastChannel API provides a simple way to send messages between different browser contexts (tabs, windows, or iframes) of the same origin. It's perfect for our use case of synchronizing state across tabs. Key features of BroadcastChannel: - Built-in browser API - Same-origin security model - Simple pub/sub messaging pattern - No need for external dependencies ### How BroadcastChannel Works The BroadcastChannel API operates on a simple principle: any browsing context (window, tab, iframe, or worker) can join a channel by creating a `BroadcastChannel` object with the same channel name. Once joined: 1. Messages are sent using the `postMessage()` method 2. Messages are received through the `onmessage` event handler 3. Contexts can leave the channel using the `close()` method ## Implementing the Plugin ### Store Configuration To use our plugin, stores need to opt-in to state sharing through configuration: ```ts export const useCounterStore = defineStore( 'counter', () => { const count = ref(0) const doubleCount = computed(() => count.value * 2) function increment() { count.value++ } return { count, doubleCount, increment } }, { share: { enable: true, initialize: true, }, }, ) ``` The `share` option enables cross-tab synchronization and controls whether the store should initialize its state from other tabs. 
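Before looking at the plugin itself, here is a minimal sketch of the raw BroadcastChannel round trip that the plugin automates; the channel name and payload shape are illustrative only:

```ts
// Any tab on the same origin can join the channel by name
const channel = new BroadcastChannel('counter')

// Receive messages posted by other tabs
channel.onmessage = (event: MessageEvent) => {
  console.log('received from another tab:', event.data)
}

// Broadcast a message to every other tab listening on 'counter'
channel.postMessage({ type: 'STATE_UPDATE', state: JSON.stringify({ count: 1 }) })

// Leave the channel when it is no longer needed
channel.close()
```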
### Plugin Registration `main.ts` Register the plugin when creating your Pinia instance: ```ts const pinia = createPinia() pinia.use(PiniaSharedState) ``` ### Plugin Implementation `plugin/plugin.ts` Here's our complete plugin implementation with TypeScript support: ```ts type Serializer = { serialize: (value: T) => string deserialize: (value: string) => T } interface BroadcastMessage { type: 'STATE_UPDATE' | 'SYNC_REQUEST' timestamp?: number state?: string } type PluginOptions = { enable?: boolean initialize?: boolean serializer?: Serializer } export interface StoreOptions extends DefineStoreOptions { share?: PluginOptions } // Add type extension for Pinia declare module 'pinia' { // eslint-disable-next-line @typescript-eslint/no-unused-vars export interface DefineStoreOptionsBase { share?: PluginOptions } } export function PiniaSharedState({ enable = false, initialize = false, serializer = { serialize: JSON.stringify, deserialize: JSON.parse, }, }: PluginOptions = {}) { return ({ store, options }: PiniaPluginContext) => { if (!(options.share?.enable ?? enable)) return const channel = new BroadcastChannel(store.$id) let timestamp = 0 let externalUpdate = false // Initial state sync if (options.share?.initialize ?? initialize) { channel.postMessage({ type: 'SYNC_REQUEST' }) } // State change listener store.$subscribe((_mutation, state) => { if (externalUpdate) return timestamp = Date.now() channel.postMessage({ type: 'STATE_UPDATE', timestamp, state: serializer.serialize(state as T), }) }) // Message handler channel.onmessage = (event: MessageEvent) => { const data = event.data if ( data.type === 'STATE_UPDATE' && data.timestamp && data.timestamp > timestamp && data.state ) { externalUpdate = true timestamp = data.timestamp store.$patch(serializer.deserialize(data.state)) externalUpdate = false } if (data.type === 'SYNC_REQUEST') { channel.postMessage({ type: 'STATE_UPDATE', timestamp, state: serializer.serialize(store.$state as T), }) } } } } ``` The plugin works by: 1. Creating a BroadcastChannel for each store 2. Subscribing to store changes and broadcasting updates 3. Handling incoming messages from other tabs 4. Using timestamps to prevent update cycles 5. Supporting custom serialization for complex state ### Communication Flow Diagram ```mermaid flowchart LR A[User interacts with store in Tab 1] --> B[Store state changes] B --> C[Plugin detects change] C --> D[BroadcastChannel posts STATE_UPDATE] D --> E[Other tabs receive STATE_UPDATE] E --> F[Plugin patches store state in Tab 2] ``` ## Using the Synchronized Store Components can use the synchronized store just like any other Pinia store: ```ts const counterStore = useCounterStore() // State changes will automatically sync across tabs counterStore.increment() ``` ## Conclusion With this Pinia plugin, we've added cross-tab state synchronization with minimal configuration. The solution is lightweight, type-safe, and leverages the built-in BroadcastChannel API. This pattern is particularly useful for applications where users frequently work across multiple tabs and need a consistent state experience. 
Remember to consider the following when using this plugin: - Only enable sharing for stores that truly need it - Be mindful of performance with large state objects - Consider custom serialization for complex data structures - Test thoroughly across different browser scenarios ## Future Optimization: Web Workers For applications with heavy cross-tab communication or complex state transformations, consider offloading the BroadcastChannel handling to a Web Worker. This approach can improve performance by: - Moving message processing off the main thread - Handling complex state transformations without blocking UI - Reducing main thread load when syncing large state objects - Buffering and batching state updates for better performance This is particularly beneficial when: - Your application has many tabs open simultaneously - State updates are frequent or computationally intensive - You need to perform validation or transformation on synced data - The application handles large datasets that need to be synced You can find the complete code for this plugin in the [GitHub repository](https://github.com/alexanderop/pluginPiniaTabs). It also has examples of how to use it with Web Workers. --- --- title: The Browser That Speaks 200 Languages: Building an AI Translator Without APIs description: Learn how to build a browser-based translator that works offline and handles 200 languages using Vue and Transformers.js tags: ['vue', 'ai'] --- # The Browser That Speaks 200 Languages: Building an AI Translator Without APIs ## Introduction Most AI translation tools rely on external APIs. This means sending data to servers and paying for each request. But what if you could run translations directly in your browser? This guide shows you how to build a free, offline translator that handles 200 languages using Vue and Transformers.js. ## The Tools - Vue 3 for the interface - Transformers.js to run AI models locally - Web Workers to handle heavy processing - NLLB-200, Meta's translation model ```mermaid --- title: Architecture Overview --- graph LR Frontend[Vue Frontend] Worker[Web Worker] TJS[Transformers.js] Model[NLLB-200 Model] Frontend -->|"Text"| Worker Worker -->|"Initialize"| TJS TJS -->|"Load"| Model Model -->|"Results"| TJS TJS -->|"Stream"| Worker Worker -->|"Translation"| Frontend classDef default fill:#344060,stroke:#AB4B99,color:#EAEDF3 classDef accent fill:#8A337B,stroke:#AB4B99,color:#EAEDF3 class TJS,Model accent ``` ## Building the Translator ![AI Translator](../../assets/images/vue-ai-translate.png) ### 1. Set Up Your Project Create a new Vue project with TypeScript: ```bash npm create vite@latest vue-translator -- --template vue-ts cd vue-translator npm install npm install @huggingface/transformers ``` ### 2. Create the Translation Worker The translation happens in a background process. 
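Before writing the worker itself, here is a sketch of how the Vue side can spawn it in a Vite project; the file path matches the worker created in the next step, and the language codes are NLLB-200 identifiers such as `eng_Latn`:

```ts
// Sketch (standard Vite module-worker syntax)
const worker = new Worker(
  new URL('./worker/translation.worker.ts', import.meta.url),
  { type: 'module' },
)

// Progress events, streaming updates, and the final result all arrive as messages
worker.addEventListener('message', (event: MessageEvent) => {
  console.log(event.data)
})

// Request a translation; the shape matches the worker's TranslationRequest interface
worker.postMessage({ text: 'Hello world', src_lang: 'eng_Latn', tgt_lang: 'deu_Latn' })
```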
Create `src/worker/translation.worker.ts`: ```typescript // Singleton pattern for the translation pipeline class MyTranslationPipeline { static task: PipelineType = 'translation'; // We use the distilled model for faster loading and inference static model = 'Xenova/nllb-200-distilled-600M'; static instance: TranslationPipeline | null = null; static async getInstance(progress_callback?: ProgressCallback) { if (!this.instance) { this.instance = await pipeline(this.task, this.model, { progress_callback }) as TranslationPipeline; } return this.instance; } } // Type definitions for worker messages interface TranslationRequest { text: string; src_lang: string; tgt_lang: string; } // Worker message handler self.addEventListener('message', async (event: MessageEvent) => { try { // Initialize the translation pipeline with progress tracking const translator = await MyTranslationPipeline.getInstance(x => { self.postMessage(x); }); // Configure streaming for real-time translation updates const streamer = new TextStreamer(translator.tokenizer, { skip_prompt: true, skip_special_tokens: true, callback_function: (text: string) => { self.postMessage({ status: 'update', output: text }); } }); // Perform the translation const output = await translator(event.data.text, { tgt_lang: event.data.tgt_lang, src_lang: event.data.src_lang, streamer, }); // Send the final result self.postMessage({ status: 'complete', output, }); } catch (error) { self.postMessage({ status: 'error', error: error instanceof Error ? error.message : 'An unknown error occurred' }); } }); ``` ### 3. Build the Interface Create a clean interface with two main components: #### Language Selector (`src/components/LanguageSelector.vue`) ```vue ``` #### Progress Bar (`src/components/ProgressBar.vue`) ```vue ``` ### 4. Put It All Together In your main app file: ```vue