Building with AI
I've found the Guardian's The Knowledge series to be a great source of football trivia and interesting facts. Using content from the series (which started in 2001), I planned to create micro blogs
for my website tactification with the help of AI, because creating them manually is an arduous task.
I first trialled OpenAI's ChatGPT web interface to generate the content
for the articles. The results from ChatGPT were impressive and I was happy with the content it generated.
However, it did not generate content for all the articles I passed as input.
So I decided to generate content locally using Ollama.
Ollama is a framework that lets you run and manage large language models on a local machine. The name is likely inspired by "llama", referencing Meta's LLaMA (Large Language Model Meta AI), a family of open-source AI models.
I read articles on Eli Bendersky's blog about how to use Ollama and followed the instructions there.
Incidentally, this is also one of my first attempts at Go.
Web scraping the Guardian's The Knowledge series and running Ollama
I used Python's BeautifulSoup
library to extract the content. The script writes the scraped articles to a JSON file, and that content is the input to the Ollama models for generating
the micro blogs. Scraping the content from the website was straightforward and deterministic.
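For reference, here is a hedged sketch of the handoff between the scraper and the Go program. The field names (Contents, Url, Datetime) come from the Knowledge struct in the code below; the sample values and URL are placeholders, not real scraped data.

package main

import (
    "encoding/json"
    "log"
    "os"
)

// Knowledge mirrors the struct the main program expects to find
// in guardian_knowledge.json.
type Knowledge struct {
    Contents []string
    Url      string
    Datetime string
}

func main() {
    // Placeholder record; in practice the BeautifulSoup script
    // produces many of these, one per scraped article.
    sample := []Knowledge{
        {
            Contents: []string{"Example paragraph scraped from a The Knowledge article."},
            Url:      "https://www.theguardian.com/football/series/theknowledge",
            Datetime: "2001-01-01T00:00:00Z",
        },
    }
    data, err := json.MarshalIndent(sample, "", "  ")
    if err != nil {
        log.Fatal(err)
    }
    if err := os.WriteFile("guardian_knowledge.json", data, 0o644); err != nil {
        log.Fatal(err)
    }
}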
As mentioned earlier, I followed the instructions from Eli Bendersky's blog to install and run Ollama.
I downloaded and tried the llama 3.1, mistral, and deepseek-r1 models for generating the content. I also downloaded
the llama 3.3 model, a 70B-parameter model, and tried it, but it was too large for my machine to handle, so I dropped it.
So far, I'm not satisfied with the content generated by these models compared to the results ChatGPT produced.
It seems more fine-tuning is required to get better results.
Go code used to run Ollama:
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/ollama"
)

// Knowledge is one scraped article from the The Knowledge series.
type Knowledge struct {
    Contents []string
    Url      string
    Datetime string
}

func main() {
    var content []Knowledge

    // Connect to the locally running Ollama instance and select a model.
    llm, err := ollama.New(ollama.WithModel("deepseek-r1"))
    if err != nil {
        log.Fatal(err)
    }

    // File that collects the generated micro blogs.
    microBlogs, err := os.Create("micro_blogs_deepseek.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer microBlogs.Close()

    // Open the scraped football data file.
    file, err := os.ReadFile("guardian_knowledge.json")
    if err != nil {
        log.Fatalf("Failed to open file: %v", err)
    }
    if err := json.Unmarshal(file, &content); err != nil {
        log.Fatalf("Error reading file: %v", err)
    }

    prompt := "Please help me to generate a micro blog post from following url: "

    // For each scraped article, ask the model for a micro blog post.
    // Note: only the article URL goes into the prompt; the scraped
    // Contents field is not included.
    for iter, trivia := range content {
        fmt.Println("Url:", trivia.Url)
        query := prompt + trivia.Url
        result := generateTweet(llm, query)

        length, err := io.WriteString(microBlogs, trivia.Url)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("iter: %d length: %d\n", iter, length)

        length, err = io.WriteString(microBlogs, result)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("iter: %d length: %d\n", iter, length)
    }
}

// generateTweet sends a single prompt to Ollama, prints the elapsed
// time, and returns the completion.
func generateTweet(llm *ollama.LLM, query string) string {
    start := time.Now()
    ctx := context.Background()
    completion, err := llms.GenerateFromSinglePrompt(ctx, llm, query)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("elapsed time:", time.Since(start))
    return completion
}