Building AI Applications with LangChainGo

Elliot Forbes · Mar 7, 2026 · 8 min read

If you’ve been curious about building AI-powered applications but don’t want to leave the comfort of Go, you’re in luck. LangChainGo is a fantastic library that brings the power of LangChain to the Go ecosystem, making it straightforward to build intelligent applications that interact with large language models.

In this tutorial, we’re going to explore LangChainGo from the ground up. We’ll start with the basics, set up a project, connect to a local LLM, and build our way up to creating a stateful chatbot. Let’s dive in!

What is LangChainGo?

LangChainGo is a Go port of the popular Python LangChain library. It provides a framework for developing applications powered by large language models (LLMs). But what makes it special?

The core idea behind LangChain (in any language) is to abstract away the complexity of working with LLMs. Instead of dealing directly with API calls and token management, you work with high-level concepts like prompts, chains, and memory. This makes your code cleaner, more maintainable, and easier to reason about.

How does it differ from Python’s LangChain? The Python version has more integrations and has been around longer, so it’s more mature. However, LangChainGo offers something equally valuable: if you’re already working in Go, you get native performance and the ability to keep your entire stack in one language. Plus, Go’s simplicity and concurrency model make it perfect for building scalable AI applications.

Setting Up Your Go Project

Let’s get our hands dirty and set up a new Go project. First, create a new directory and initialize your Go module:

mkdir langchain-go-tutorial
cd langchain-go-tutorial
go mod init github.com/yourname/langchain-go-tutorial

Now we need to pull in LangChainGo and the other dependencies we’ll need. The core package lives in the github.com/tmc/langchaingo module. Let’s add it:

go get github.com/tmc/langchaingo

We’ll also need a specific integration for our LLM provider. For this tutorial, we’re going to use Ollama, which lets us run LLMs locally without relying on expensive cloud APIs.

go get github.com/tmc/langchaingo/llms/ollama

Perfect! Your go.mod file should now list the necessary dependencies. To clean up the module file and make sure everything resolves, run:

go mod tidy

Connecting to Ollama as a Local LLM Provider

Before we start writing code, let’s set up Ollama. Ollama is an amazing tool that lets you run LLMs on your local machine. It’s free and gives you complete control over your data.

First, download and install Ollama from ollama.ai. Once installed, open a terminal and pull a model. We’ll use Mistral, which is lightweight and performs well:

ollama pull mistral

On most systems Ollama starts a background server automatically after installation; if it isn’t already running, start it manually:

ollama serve

By default, Ollama runs on http://localhost:11434. Keep this terminal open while you’re developing.

Your First Go Program: Simple Text Generation

Let’s write a simple program that generates text using Ollama. Create a file called main.go:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	// Initialize the Ollama LLM
	llm, err := ollama.New(
		ollama.WithModel("mistral"),
		ollama.WithServerURL("http://localhost:11434"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Create a simple text generation request
	ctx := context.Background()
	response, err := llm.GenerateContent(ctx, []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman, "What is the capital of France? Answer in one sentence."),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Print the response
	fmt.Println("Response:")
	for _, choice := range response.Choices {
		fmt.Println(choice.Content)
	}
}

Run this with:

go run main.go

You should see the model respond with the capital of France. Congratulations! You’ve just built your first LangChainGo application.

Working with Prompts and Prompt Templates

Hardcoding prompts isn’t very flexible. Let’s use LangChainGo’s prompt templates to make our code more reusable. Prompt templates are strings with placeholders that you fill in dynamically; by default, LangChainGo uses Go’s text/template syntax, so a variable looks like {{.country}} rather than a bare name.

Create a new file called prompts.go:

package main

import (
	"github.com/tmc/langchaingo/prompts"
)

func getCapitalPrompt() prompts.PromptTemplate {
	return prompts.NewPromptTemplate(
		"What is the capital of {{.country}}? Answer in one sentence.",
		[]string{"country"},
	)
}

func getSummaryPrompt() prompts.PromptTemplate {
	return prompts.NewPromptTemplate(
		"Summarize the following text in 2-3 sentences:\n\n{{.text}}",
		[]string{"text"},
	)
}

Now let’s update our main.go to use prompts:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/prompts"
)

func main() {
	llm, err := ollama.New(
		ollama.WithModel("mistral"),
		ollama.WithServerURL("http://localhost:11434"),
	)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Use a prompt template
	template := "What is the capital of {{.country}}? Answer in one sentence."
	prompt := prompts.NewPromptTemplate(template, []string{"country"})

	// Format the prompt with actual values
	formattedPrompt, err := prompt.Format(map[string]any{
		"country": "Japan",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Generate content using the formatted prompt
	response, err := llm.GenerateContent(ctx, []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman, formattedPrompt),
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Response:")
	for _, choice := range response.Choices {
		fmt.Println(choice.Content)
	}
}

This approach is much more maintainable. You can now create different prompts for different tasks and reuse them throughout your application.

Chaining Operations Together

The real power of LangChain comes from chains. A chain combines a prompt template, an LLM, and an output parser to create a reusable workflow. Let’s build a simple chain:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/prompts"
)

func createCapitalChain(llm llms.Model) chains.Chain {
	prompt := prompts.NewPromptTemplate(
		"What is the capital of {{.country}}? Answer in one sentence.",
		[]string{"country"},
	)

	return chains.NewLLMChain(llm, prompt)
}

func main() {
	llm, err := ollama.New(
		ollama.WithModel("mistral"),
		ollama.WithServerURL("http://localhost:11434"),
	)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	chain := createCapitalChain(llm)

	// Run the chain
	result, err := chains.Run(ctx, chain, "France")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Response:")
	fmt.Println(result)
}

Chains make it easy to build complex workflows. You can combine multiple chains, add conditional logic, and handle errors gracefully. This is where LangChainGo really starts to shine.

Adding Memory to Conversations

For a chatbot, we need memory. LangChainGo provides a memory abstraction that tracks conversation history. Let’s set this up:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/memory"
	"github.com/tmc/langchaingo/prompts"
)

func createChatChain(llm llms.Model, mem *memory.ConversationBuffer) chains.Chain {
	template := `You are a helpful AI assistant.
Current conversation:
{{.history}}

User: {{.input}}
Assistant:`

	prompt := prompts.NewPromptTemplate(template, []string{"history", "input"})

	c := chains.NewLLMChain(llm, prompt)
	c.Memory = mem
	return c
}

func main() {
	llm, err := ollama.New(
		ollama.WithModel("mistral"),
		ollama.WithServerURL("http://localhost:11434"),
	)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Create a buffer memory to store conversation history
	mem := memory.NewConversationBuffer()

	// Create the conversation chain
	chain := createChatChain(llm, mem)

	// Have a multi-turn conversation
	inputs := []string{
		"Hello! My name is Alice.",
		"What's my name?",
		"Tell me a joke.",
	}

	for _, input := range inputs {
		fmt.Printf("User: %s\n", input)

		result, err := chains.Run(ctx, chain, input)
		if err != nil {
			log.Fatal(err)
		}

		fmt.Printf("Assistant: %s\n\n", result)
	}
}

The ConversationBuffer memory stores the full conversation history. Other memory types are available (such as ConversationWindowBuffer, which keeps only the most recent exchanges), but the plain buffer works great for getting started.

Building a Practical CLI Chatbot

Now let’s bring it all together and build a real chatbot that you can actually interact with. This will be a command-line application that maintains context across multiple turns:

package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/memory"
	"github.com/tmc/langchaingo/prompts"
)

func main() {
	// Initialize the LLM
	llm, err := ollama.New(
		ollama.WithModel("mistral"),
		ollama.WithServerURL("http://localhost:11434"),
	)
	if err != nil {
		log.Fatal("Failed to initialize Ollama:", err)
	}

	// Create conversation memory
	mem := memory.NewConversationBuffer()

	// Create the conversation chain
	template := `You are a helpful AI assistant. Be concise and friendly.

Conversation history:
{{.history}}

User: {{.input}}
Assistant:`

	prompt := prompts.NewPromptTemplate(template, []string{"history", "input"})

	chain := chains.NewLLMChain(llm, prompt)
	chain.Memory = mem

	// Set up the reader for user input
	reader := bufio.NewReader(os.Stdin)
	ctx := context.Background()

	fmt.Println("Welcome to the LangChainGo Chatbot!")
	fmt.Println("Type 'exit' to quit.")
	fmt.Println()

	for {
		// Read user input
		fmt.Print("You: ")
		input, err := reader.ReadString('\n')
		if err != nil {
			log.Fatal(err)
		}

		input = strings.TrimSpace(input)

		// Check for exit command
		if strings.ToLower(input) == "exit" {
			fmt.Println("Goodbye!")
			break
		}

		if input == "" {
			continue
		}

		// Run the chain
		result, err := chains.Run(ctx, chain, input)
		if err != nil {
			fmt.Printf("Error: %v\n", err)
			continue
		}

		fmt.Printf("Bot: %s\n\n", strings.TrimSpace(result))
	}
}

To run this interactive chatbot:

go run main.go

Then you can have a natural conversation. The bot will remember what you’ve said in previous messages and respond contextually. Try it out! Ask it about something, then reference it later in the conversation. The magic of LangChainGo’s memory system will handle it.

Tips and Best Practices

As you build more sophisticated applications, keep these things in mind:

Prompt Engineering: The quality of your prompts directly impacts the quality of responses. Spend time crafting clear, specific prompts. Include examples or context when needed.

Token Management: Hosted LLMs charge by token, and every model, local or hosted, has a finite context window. Be mindful of how much conversation history you’re storing in memory; consider a windowed or summarizing memory type for long conversations.

Error Handling: Always check errors, especially when dealing with external services like Ollama. Implement retries for network failures.

Streaming: For a better user experience with long responses, consider streaming tokens as they arrive instead of waiting for the full response. LangChainGo supports this through call options such as llms.WithStreamingFunc and its callbacks system.

Testing: Write tests for your chains and prompts. This helps ensure your application behaves as expected when you change prompts or swap LLMs.

Wrapping Up

LangChainGo brings the power of LangChain to Go developers, and we’ve just scratched the surface of what’s possible. We’ve learned how to:

  • Set up a Go project with LangChainGo
  • Connect to Ollama for local LLM inference
  • Use prompts and templates for reusable logic
  • Chain operations together
  • Add memory for stateful conversations
  • Build an interactive chatbot

From here, you can explore more advanced features like document loading, embeddings, vector stores, and agent patterns. The LangChainGo documentation and GitHub repository are great resources for diving deeper.

The beautiful thing about LangChainGo is that it lets you leverage the power of LLMs without leaving Go. You get the performance benefits of Go, the simplicity of the language, and access to cutting-edge AI capabilities. Pretty cool, right?

Go forth and build something amazing with LangChainGo!