Commit f6e8a47

Author: rishabh
Commit message: Move blog to fumadocs

44 files changed, +5027 -0 lines

.gitignore

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
.next
.vercel
**/node_modules/**
.DS_Store
.map.ts

README.md

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
# blog

This is a Next.js application generated with
[Create Fumadocs](https://github.com/fuma-nama/fumadocs).

Install dependencies:

```bash
bun install
```

Run development server:

```bash
bun dev
```

Open http://localhost:3000 with your browser to see the result.

Dev commands:

```bash
# format
bun run format
# lint
bun run lint
# lint (fix)
bun run lint:fix
# format, lint, and import sort
bun run check:fix
```

## Learn More

To learn more about Next.js and Fumadocs, take a look at the following
resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js
  features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
- [Fumadocs](https://fumadocs.vercel.app) - learn about Fumadocs

content/hackernews-rag.mdx

Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
---
title: "RAG on Hacker News comments to generate a research summary"
description: "Learn how to search Hacker News comments for a topic, extract sentiment, and generate a research summary in 34 lines of Substrate code. Runs dozens of LLM calls in parallel and streams markdown. Built in 15 minutes, easy to remix."
date: 2024-07-15
image: "/hnrag.png"
---

<div class="hero-image">
  <img width={1020} height={510} src="/hnrag.png" alt="RAG on Hacker News comments to generate a research summary" />
</div>

In this post, we'll show you how to search Hacker News comments for a topic, extract sentiment, and generate a research summary in 34 lines of code using Substrate.

- [Read on Twitter](https://x.com/vprtwn/status/1812844236401762513)
- [Read on LinkedIn](https://www.linkedin.com/pulse/rag-hacker-news-comments-34-lines-code-substratelabs-pouje)

<br/>

This concise RAG implementation runs dozens of LLM calls in parallel and streams the markdown as it's generated. It's easy to remix, and genuinely useful: internally, we've already written several scripts like this for Reddit, LinkedIn, and Twitter, and set up alerts to Slack.

<iframe width="100%" height="600px" src="https://www.val.town/embed/substrate/hackerNewsRAG" title="Val Town" frameborder="0" allow="web-share" allowfullscreen></iframe>

<br/>

![hnrag](/hnrag.gif)

First, we search Hacker News comments using the [Algolia HN Search API](https://hn.algolia.com/api).

```typescript
const searchResults = await hnSearch({
  query: query,
  numericFilters: `created_at_i>${Math.floor(Date.now() / 1000) - 60 * 60 * 24 * 7 * 4}`,
  tags: "comment",
});
```
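
The `hnSearch` helper itself isn't shown in the post (it lives in the val.town source). As a sketch of what it might look like, built directly on the public Algolia HN Search API — the function shape and names here are assumptions, not the original code:

```typescript
// Hypothetical reimplementation of the hnSearch helper used above,
// targeting the public Algolia HN Search API (https://hn.algolia.com/api).
type HNSearchParams = {
  query: string;
  numericFilters?: string;
  tags?: string;
};

// Build the request URL separately so it is easy to inspect and test.
function hnSearchUrl({ query, numericFilters, tags }: HNSearchParams): string {
  const url = new URL("https://hn.algolia.com/api/v1/search");
  url.searchParams.set("query", query);
  if (tags) url.searchParams.set("tags", tags);
  if (numericFilters) url.searchParams.set("numericFilters", numericFilters);
  return url.toString();
}

async function hnSearch(params: HNSearchParams): Promise<{ hits: any[] }> {
  const res = await fetch(hnSearchUrl(params));
  if (!res.ok) throw new Error(`HN search failed: ${res.status}`);
  return res.json();
}
```

The `created_at_i>…` numeric filter in the snippet above is plain epoch arithmetic: it restricts results to comments from the last four weeks (60 × 60 × 24 × 7 × 4 = 2,419,200 seconds).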
<br/>

Next, we use ComputeJSON to extract a summary, sentiment, and other metadata from each comment. Structured JSON generation is ergonomic, reliable, and blazing-fast on Substrate compared to other providers. This is critical for multi-step workflows.

```typescript
let summaries = [];
for (const hit of searchResults.hits) {
  summaries.push(
    new ComputeJSON({
      prompt: `Summarize this comment and how it relates to the topic: ${query}
Use "negative" sentiment for posts about API, abstraction, documentation, tutorial, general quality, slowness, or performance issues.
COMMENT: ${JSON.stringify(hit)}`,
      json_schema: zodToJsonSchema(commentInfo),
    }),
  );
}
```
<br/>

Finally, we use ComputeText to generate a markdown summary of all the extracted JSON, and stream the results. Streaming on Substrate is really cool: you can of course stream the response of an individual LLM, but you can also stream the incremental steps of your workflow.

```typescript
const markdown = new ComputeText({
  prompt: sb.concat(
    `Below is a list of summarized comments about ${query} on Hacker News.
Generate concise markdown summarizing the results.
Summarize the contents of the comment and the sentiment about ${query}.
Categorize results under sentiment headers.
Order from most negative to least negative within each category.
Add a link to the original story URL in this format: [<story title>](https://news.ycombinator.com/item?id=<objectID>)
Filter out posts that do not seem to be about ${query}.
RESULTS:\n`,
    ...summaries.map((s) => sb.jq(s.future.json_object, "@json")),
  ),
  model: "Llama3Instruct70B",
});
const stream = await substrate.stream(markdown);
```
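
What you do with `stream` depends on the SDK. Assuming the returned stream can be consumed as an async iterable of events (an assumption — check the Substrate SDK docs for the exact event shape), the consuming loop looks roughly like this, shown against a mock stream so the sketch stays self-contained:

```typescript
// Mock stand-in for the real stream so this sketch runs without an API key.
// The event shape ({ chunk }) is hypothetical, not the SDK's actual type.
async function* mockStream(): AsyncGenerator<{ chunk: string }> {
  yield { chunk: "## Positive\n" };
  yield { chunk: "- Great developer experience\n" };
}

// Append incremental markdown as events arrive.
async function consume(stream: AsyncIterable<{ chunk: string }>): Promise<string> {
  let markdown = "";
  for await (const event of stream) {
    markdown += event.chunk;
  }
  return markdown;
}
```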
<br/>

The code we wrote was really simple. Implicitly, we were creating the graph below, but we didn't have to think about the graph at all. With Substrate, simply by relating tasks to each other, we get automatic parallelization of dozens of LLM calls for free, with zero roundtrips.

![graph](/hnrag-graph.png)

Great power with great simplicity.

View the full source, fork, and remix here: https://www.val.town/v/substrate/hackerNewsRAG

- [Read on Twitter](https://x.com/vprtwn/status/1812844236401762513)
- [Read on LinkedIn](https://www.linkedin.com/pulse/rag-hacker-news-comments-34-lines-code-substratelabs-pouje)

content/introducing.mdx

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
---
title: "Introducing Substrate"
description: "Introducing Substrate, the API for modular AI"
date: 2024-06-20
image: "/launch-image.png"
---

<div class="hero-image">
  <img width={1020} height={510} src="/launch-image.png" alt="Introducing Substrate" />
</div>

Today, we're launching [Substrate](https://substrate.run). We're also announcing our $8M Series Seed led by [Lightspeed](https://lsvp.com/stories/substrate-building-compound-ai-systems/).

We believe the most robust and productive integrations of AI come from many inference runs used in coordination with each other in a well-defined logical structure. This leads to more capable, more reliable, and more interpretable AI systems.

Most people building with AI already know this: so-called "agentic" processes are becoming the norm, along with using LLMs for structured JSON generation in more constrained logical flows. But unless you work for Google, the main barrier to realizing multi-step AI workloads in your application is infrastructure. Most developers are left either creating an unwieldy mess of chained API calls to multiple providers, which requires slow round-trips and expensive one-off calls, or attempting to deploy their own infrastructure, which, without massive investment, tends to result in systems that are resource-inefficient and slow.

We took a hard look at this state of affairs and recognized how much it is stifling progress.

Building large multi-step AI workloads requires sophisticated, high-performance tooling and infrastructure. Nobody wants to deal with more tooling and infrastructure… but everyone would benefit from simple, intuitive interfaces that abstract away a powerful system underneath, if they are flexible enough to work in any domain.

No tooling, no infrastructure – just elegant abstractions.

Substrate is the first inference API optimized specifically for multi-step AI workloads. With Substrate, you connect nodes from a [curated library](https://docs.substrate.run/overview/api) that includes optimized ML models, built-in file and vector storage, a code interpreter, and logical control flow. Simply by connecting nodes, you describe a graph program, which Substrate then analyzes and runs as fast as possible. Entire graphs of many nodes will often run on a single machine, with automatic batching and microsecond communication between tasks.

We've been working on Substrate privately for nearly a year. We've battle-tested the product with great customers like [Substack](/substack), and we're finally ready to open access to everyone.

[Let us know](https://join.slack.com/t/substratecommunity/shared_invite/zt-2jd8w6b7n-b0qE5QWV7rsClGglHeu_rA) what you think. We can't wait to see what you build.

content/substack.mdx

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
---
title: "Substack runs modular AI workloads on Substrate"
description: "By choosing Substrate, Substack develops multi-inference AI workloads with greater speed and flexibility than ever before."
date: 2024-06-21
image: "/substack-substrate.png"
---

<div class="hero-image">
  <img width={1020} height={510} src="/substack-substrate.png" alt="Substack runs modular AI workloads on Substrate" />
</div>

[Substack](https://substack.com) is a large online publishing platform that enables writers to engage directly with their readers, with over 17k active writers.

Substack employs ML for a variety of purposes, including image generation, content categorization, content recommendation, semantic search, and audio transcription. For all of these use cases, Substack has moved their inference workloads to Substrate.

Initially, Substack tried using other tools to power generative AI features embedded in their publishing flow. But the result was slow and expensive, and in order to roll these features out to all writers on their platform, they had to find another solution. They knew that if they could wave a magic wand, their ideal solution would be a set of simple APIs they could call, without any additional infrastructure for their engineering team to manage. But speed, cost, reliability, and extensibility were critical, and no provider fit the bill. Substrate offered performant inference for all of the models they wanted to use, behind a polished API.

Substack was also exploring ways to integrate LLMs, semantic vectors, and vector databases into their internal systems to categorize and recommend content. These tasks required using an ensemble of ML models in coordination with a vector database. When using other providers, Substack found that making many parallel or chained API requests in a single workflow was prohibitively slow, and often triggered rate limits. They considered running the infrastructure themselves – which their engineering team would have been capable of – but they knew this would come at a cost to progress on their core product.

Because Substack already used Substrate for performant inference with individual models, using Substrate for multi-model pipelines and integrated vector retrieval was an obvious choice. Using the Substrate TypeScript SDK, Substack started composing LLM, VLM, transcription, embedding, and retrieval tasks into graph workflows. Today, Substack runs many multi-inference workloads (some with dozens of nodes) at scale across their entire content catalog.

By choosing Substrate, Substack has been able to develop large-scale, modular, multi-inference AI workflows with greater speed and flexibility than ever before.

next-env.d.ts

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
/// <reference types="next" />
/// <reference types="next/image-types/global" />

// NOTE: This file should not be edited
// see https://nextjs.org/docs/basic-features/typescript for more information.

next-sitemap.config.js

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
/** @type {import('next-sitemap').IConfig} */
module.exports = {
  siteUrl: process.env.NEXT_PUBLIC_SITE_URL || 'https://www.substrate.run/blog',
  generateRobotsTxt: true,
};

next.config.mjs

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
import createMDX from 'fumadocs-mdx/config';
import rehypeKatex from 'rehype-katex';
import remarkMath from 'remark-math';

const withMDX = createMDX({
  mdxOptions: {
    lastModifiedTime: 'git',
    remarkPlugins: [remarkMath],
    rehypePlugins: (v) => [rehypeKatex, ...v],
  },
});

/** @type {import('next').NextConfig} */
const config = {
  reactStrictMode: true,
};

export default withMDX(config);

package.json

Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
{
  "name": "blog",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "build": "next build",
    "postbuild": "next-sitemap",
    "dev": "next dev",
    "start": "next start",
    "format": "biome format --write .",
    "lint": "biome lint .",
    "lint:fix": "biome lint --write --unsafe .",
    "check": "biome check .",
    "check:fix": "biome check --write --unsafe ."
  },
  "dependencies": {
    "@next/third-parties": "^14.2.4",
    "@radix-ui/react-dialog": "^1.1.1",
    "class-variance-authority": "^0.7.0",
    "clsx": "^2.0.0",
    "fumadocs-core": "^12.5.6",
    "fumadocs-mdx": "^8.2.33",
    "fumadocs-ui": "^12.3.4",
    "katex": "^0.16.10",
    "lucide-react": "^0.399.0",
    "next": "^14.2.4",
    "next-sitemap": "^4.2.3",
    "react": "^18.3.1",
    "react-dom": "^18.3.1",
    "rehype-katex": "^7.0.0",
    "remark-math": "^6.0.0",
    "tailwind-merge": "^1.14.0",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/mdx": "^2.0.11",
    "@types/node": "^20.14.9",
    "@types/react": "^18.3.3",
    "@types/react-dom": "^18.2.21",
    "autoprefixer": "^10.4.18",
    "postcss": "^8.4.39",
    "tailwindcss": "3.4.4",
    "typescript": "^5.5.2"
  }
}
