
Optimizing Google Search Using Goroutines in Go

04/04/25

When you’re building a search system—whether it’s for a fun side project, an internal tool, or just to better understand concurrency—speed and responsiveness are everything. Luckily, Go makes it super easy to write fast, concurrent code thanks to its goroutines.

In this post, we’ll walk through building a simplified, mock version of Google Search in Go. We'll start with a basic, slow version and gradually improve it using goroutines, timeouts, and even replica-based redundancy. Let's dive in!

Starting with the Basics

First, let’s set up a little framework to simulate different kinds of search—like web, image, and video.

import (
	"fmt"
	"math/rand"
	"time"
)

type Result string

type Search func(query string) Result

func fakeSearch(kind string) Search {
	return func(query string) Result {
		// Sleep for a random interval up to 100ms to simulate latency.
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		return Result(fmt.Sprintf("%s result for %q\n", kind, query))
	}
}

This fakeSearch function pretends to do real work by sleeping for a random time (up to 100ms) before returning a result. It gives us a way to simulate slow or unpredictable responses, just like in real-world systems.
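To see it in action, here's a minimal, self-contained program that calls one of the fake searches directly (the timing print is just for illustration):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type Result string

type Search func(query string) Result

func fakeSearch(kind string) Search {
	return func(query string) Result {
		// Sleep for a random interval up to 100ms to simulate latency.
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		return Result(fmt.Sprintf("%s result for %q\n", kind, query))
	}
}

func main() {
	start := time.Now()
	web := fakeSearch("web")
	fmt.Print(web("golang"))
	fmt.Println("took", time.Since(start))
}
```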

Google Search 1.0 – The Simple Way

func Google(query string) (results []Result) {
	results = append(results, fakeSearch("web")(query))
	results = append(results, fakeSearch("image")(query))
	results = append(results, fakeSearch("video")(query))

	return
}

This version runs all the searches one after the other—first web, then image, then video. It works fine, but it's not exactly fast. If each search takes 100ms, your user might be waiting up to 300ms total. Not great.

Google Search 2.0 – Going Concurrent

Let’s make it faster by running the searches at the same time.

func Google(query string) (results []Result) {
	c := make(chan Result, 3)

	go func() { c <- fakeSearch("web")(query) }()
	go func() { c <- fakeSearch("image")(query) }()
	go func() { c <- fakeSearch("video")(query) }()

	for i := 0; i < cap(c); i++ {
		result := <-c
		results = append(results, result)
	}

	return
}

Now we’re cooking! Each search runs in its own goroutine, and we collect the results as they come in. Total search time? Just the time it takes for the slowest search to finish—much better!
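To convince yourself of the speedup, here's a self-contained sketch that times both versions side by side. The functions are renamed GoogleSequential and GoogleConcurrent here purely so they can coexist in one file:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type Result string

func fakeSearch(kind string) func(string) Result {
	return func(query string) Result {
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		return Result(fmt.Sprintf("%s result for %q\n", kind, query))
	}
}

// GoogleSequential runs the searches one after another: latency is the sum.
func GoogleSequential(query string) (results []Result) {
	for _, kind := range []string{"web", "image", "video"} {
		results = append(results, fakeSearch(kind)(query))
	}
	return
}

// GoogleConcurrent runs them in goroutines: latency is roughly the max.
func GoogleConcurrent(query string) (results []Result) {
	c := make(chan Result, 3)
	for _, kind := range []string{"web", "image", "video"} {
		kind := kind // capture loop variable (needed before Go 1.22)
		go func() { c <- fakeSearch(kind)(query) }()
	}
	for i := 0; i < cap(c); i++ {
		results = append(results, <-c)
	}
	return
}

func main() {
	start := time.Now()
	GoogleSequential("golang")
	fmt.Println("sequential:", time.Since(start))

	start = time.Now()
	GoogleConcurrent("golang")
	fmt.Println("concurrent:", time.Since(start))
}
```

On most runs the concurrent version finishes in roughly the time of the single slowest search, while the sequential one pays for all three.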

Google Search 2.1 – Adding Timeouts

What if one of the searches is super slow or hangs forever? Let's make sure our app stays snappy no matter what.

func Google(query string) (results []Result) {
	c := make(chan Result, 3)

	go func() { c <- fakeSearch("web")(query) }()
	go func() { c <- fakeSearch("image")(query) }()
	go func() { c <- fakeSearch("video")(query) }()

	timeout := time.After(80 * time.Millisecond)

	for i := 0; i < cap(c); i++ {
		select {
		case result := <-c:
			results = append(results, result)
		case <-timeout:
			fmt.Println("timed out")
			results = append(results, "timeout")
			return
		}
	}
	return
}

Here, we use time.After to add a timeout. If any search takes longer than 80ms, we stop waiting and return what we have so far, plus a "timeout" placeholder. That way, the system stays responsive even if something goes wrong. Note that the channel is buffered with capacity 3, so any goroutines that finish after the timeout can still send their result and exit normally instead of blocking forever.

Google Search 3.0 – Smarter with Replicas

Let’s take it up a notch. What if we could avoid slow servers altogether by querying multiple replicas of a search service and taking the fastest response?

Enter the First function:

func First(query string, replicas ...Search) (result Result) {
	c := make(chan Result, len(replicas))

	for _, replica := range replicas {
		go func(replica Search) {
			c <- replica(query)
		}(replica)
	}

	timeout := time.After(80 * time.Millisecond)

	select {
	case result = <-c:
	case <-timeout:
		fmt.Println("timed out")
		result = "timeout"
	}

	return
}

Now, we can use the First function to perform searches with replicas. This function takes a query and a list of search replicas, launching each replica in its own goroutine. It waits for the first result or a timeout, ensuring that we get a response even if some replicas are slow or unresponsive.
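For example, calling First directly with three replicas of the web search looks like this. The "web1"/"web2"/"web3" labels are just to make it visible which replica won the race:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type Result string

type Search func(query string) Result

func fakeSearch(kind string) Search {
	return func(query string) Result {
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		return Result(fmt.Sprintf("%s result for %q\n", kind, query))
	}
}

func First(query string, replicas ...Search) (result Result) {
	c := make(chan Result, len(replicas))
	for _, replica := range replicas {
		go func(replica Search) {
			c <- replica(query)
		}(replica)
	}
	select {
	case result = <-c:
	case <-time.After(80 * time.Millisecond):
		result = "timeout"
	}
	return
}

func main() {
	start := time.Now()
	// Three replicas of the same backend; the fastest one wins.
	result := First("golang", fakeSearch("web1"), fakeSearch("web2"), fakeSearch("web3"))
	fmt.Print(result)
	fmt.Println("took", time.Since(start))
}
```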

Google Search 3.1 – Putting It All Together

func Google(query string) (results []Result) {
	c := make(chan Result, 3)

	go func() { c <- First(query, fakeSearch("web"), fakeSearch("web"), fakeSearch("web")) }()
	go func() { c <- First(query, fakeSearch("image"), fakeSearch("image"), fakeSearch("image")) }()
	go func() { c <- First(query, fakeSearch("video"), fakeSearch("video"), fakeSearch("video")) }()

	timeout := time.After(80 * time.Millisecond)

	for i := 0; i < cap(c); i++ {
		select {
		case result := <-c:
			results = append(results, result)
		case <-timeout:
			fmt.Println("timed out")
			results = append(results, "timeout")
			return
		}
	}

	return
}

This version of Google queries three replicas for each search type and uses First to take the fastest response from each. Combined with the outer timeout, a single slow replica no longer drags down the whole search.
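Putting the whole post together, here is one self-contained program you can run as-is; it's simply the pieces above assembled into a single file:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type Result string

type Search func(query string) Result

func fakeSearch(kind string) Search {
	return func(query string) Result {
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		return Result(fmt.Sprintf("%s result for %q\n", kind, query))
	}
}

// First races the replicas and returns the fastest answer, or "timeout".
func First(query string, replicas ...Search) (result Result) {
	c := make(chan Result, len(replicas))
	for _, replica := range replicas {
		go func(replica Search) { c <- replica(query) }(replica)
	}
	select {
	case result = <-c:
	case <-time.After(80 * time.Millisecond):
		result = "timeout"
	}
	return
}

// Google runs each search type against three replicas, concurrently.
func Google(query string) (results []Result) {
	c := make(chan Result, 3)
	go func() { c <- First(query, fakeSearch("web"), fakeSearch("web"), fakeSearch("web")) }()
	go func() { c <- First(query, fakeSearch("image"), fakeSearch("image"), fakeSearch("image")) }()
	go func() { c <- First(query, fakeSearch("video"), fakeSearch("video"), fakeSearch("video")) }()

	timeout := time.After(80 * time.Millisecond)
	for i := 0; i < cap(c); i++ {
		select {
		case result := <-c:
			results = append(results, result)
		case <-timeout:
			results = append(results, "timeout")
			return
		}
	}
	return
}

func main() {
	start := time.Now()
	for _, r := range Google("golang") {
		fmt.Print(r)
	}
	fmt.Println("took", time.Since(start))
}
```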

Conclusion

With just a few lines of Go, we turned a slow, sequential search into a fast, concurrent, fault-tolerant system. Goroutines and channels make this kind of architecture both simple and powerful: running the searches concurrently cuts total latency to roughly that of the slowest individual search, timeouts keep the system responsive when a search takes too long, and querying replicas lets us route around slow servers entirely.