Search Google developer videos

Please note: I am currently rebuilding this app, so the URLs below will not work.

I built a prototype API and application for searching Google developer video transcripts and metadata:


Sample search app for Google developer videos


Screenshot of a JSON response from the API


1. I work for a search company.

2. More and more content online is video.

3. Video is inherently opaque to textual search.

How can we navigate and search media content? Not least the thousands of videos on Google Developers, Android Developers and Chrome Developers.

Thankfully Google has a secret weapon: a crack team of transcribers who produce highly accurate, timecoded captions for Google videos. (Not to be confused with the sometimes slightly surreal automated alternative.)

Some people prefer to access information by reading text rather than watching videos, so I also enabled access to downloadable transcripts:

The transcripts have Google Translate built in, so you can choose to read them in a different language. Caption highlighting is synchronised with video playback — and you can tap or click on any part of a transcript to navigate through the video.

Web page screenshot showing transcript of Google Developer video HTTP 203: Pointer Events

HTTP 203: Pointer Events translated into French
(Apologies if the translation is dodgy…)

I hope the app and API are useful. As ever, your feedback would be much appreciated.

With some minor changes you could use my app and API to build search for any YouTube channel that has manually captioned videos: just adjust the channels in the code.

A quick hat-tip to the world’s transcribers, captioners and reporters.

These unsung heroes have amazing, hard-won skills! If you’ve ever seen a captioner at work at a live event, you’ll understand what a complex and difficult job it is.

Why are captions important? Because they give more people better access to media: those of us with impaired hearing, or whose first language isn’t the language of the video we’re watching.

Likewise, respect is due to the archivists who catalogue video after broadcast — the art of ‘shotlisting’. Without shotlists, history is lost (and it’s hard to resell footage…) Where was that sequence of Donald Rumsfeld getting down with Saddam Hussein? Shotlisters work long days faithfully cataloguing and timecoding news stories, often at double speed, one package (or rush) after another. One of my ex-colleagues at ITN, Jude Cowan, wrote a brilliant and moving book of poetry on the subject.


Search for something:

Transcripts for two or more videos:,3i9WFgMuKHs

Link to a query:

Data and transcript for a video:

Transcript only:

Multiple values — comma, semicolon or pipe delimiter, spaces OK:,iZZdhTUP5qg, iZZdhTUP5qg

Search any field for a query, spaces OK — can be a bit slow: 203

More shortcuts: c for captions, s for speaker — speakers are parsed from transcript:

Can specify ranges for commentCount, dislikeCount, favoriteCount, likeCount, viewCount:>10000

You can use any of these values to specify order:>10000&sort=viewCount

Add a hyphen for descending order:>10000&sort=-viewCount

Show items with titles that include ‘Android’ or

Items with speakers that include Reto and a title that includes Android:

Spaces are OK: Meier&title=Android

More complex stuff works too: Wear|description=Android Wear)&speakers=Reto Wear|description=Android Wear)&speakers=[Reto,Wayne]”Android Wear”|title=WebRTC Wear|description=Android Wear)&speakers=Timothy

Fuzzy matching — with apologies to Wayne, whose name I generally misspell :):

For dates, use ‘from’ and ‘to’, which can cope with anything Date can handle:
// a text-only value is assumed to be a month this year
2014 // midnight, 1 January to midnight, 1 January

Get total for any quantity field — this query returns the total number of views for all videos:

Get total for any query and quantity field:

Get all individual values for any quantity field for all videos — returns an object keyed by amounts, values are number of occurrences for each amount:

Get all individual values for any quantity field for any query:

Build a chart from results (views for videos that mention ‘Chrome’):
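To make the filter and sort examples above concrete, here is a minimal sketch (not the app's actual code) of how parameters like ‘viewCount=>10000&sort=-viewCount’ might be parsed. The field names come from the examples; parseQuery() itself is my invention.

```javascript
// Parse a query string of filters plus an optional sort parameter.
// A leading hyphen on the sort value means descending order.
function parseQuery(queryString) {
  var result = {filters: {}, sort: null};
  queryString.split('&').forEach(function(pair) {
    var match = /^(\w+)(=[<>]?)(.*)$/.exec(pair);
    if (!match) { return; }
    var field = match[1];
    var value = decodeURIComponent(match[3]);
    if (field === 'sort') {
      result.sort = value.charAt(0) === '-' ?
          {field: value.slice(1), descending: true} :
          {field: value, descending: false};
    } else {
      result.filters[field] = {comparator: match[2], value: value};
    }
  });
  return result;
}

var q = parseQuery('viewCount=>10000&sort=-viewCount');
```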

The code

Available from GitHub:

Truth be told, it’s a bit of a dog’s dinner. I wrote most of the app and API on long flights and in the small hours under the influence of jet lag. E&OE! The JavaScript is a little… procedural, and I hereby pledge that I will mend my ways.

Issues and pull requests welcome.

There are three code directories:

app: the web client. If run from localhost, this will automatically choose the local Node middle layer (below).

get: middle layer Node app to get data from the database. For testing, you can run this locally with the app running from localhost. I run the live version on Nodejitsu.

put: Node app to get YouTube data and transcripts, massage the response and put it in a CouchDB database.
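The put step boils down to one HTTP PUT of a JSON document per video, which is how CouchDB stores documents. A minimal sketch, assuming the YouTube video ID doubles as the document ID; buildPutRequest(), the document shape and the example database URL are placeholders, not the app's real schema.

```javascript
// Build the pieces of a CouchDB document upsert for one video.
// CouchDB creates or updates the document at /db/docId.
function buildPutRequest(dbUrl, video) {
  return {
    method: 'PUT',
    url: dbUrl + '/' + encodeURIComponent(video.id),
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({
      title: video.title,
      publishedAt: video.publishedAt,
      captions: video.captions,     // assumed shape: [{start, dur, text}]
      transcript: video.transcript  // plain-text convenience copy
    })
  };
}

var req = buildPutRequest('https://example.cloudant.com/videos', {
  id: 'iZZdhTUP5qg',
  title: 'Example video',
  publishedAt: '2014-06-25',
  captions: [],
  transcript: ''
});
```

(Updating an existing CouchDB document also requires passing its current `_rev`; that bookkeeping is omitted here.)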


Why didn’t you use Node on Google?

I’d like to, but Nodejitsu is very easy:

$ npm install jitsu
$ jitsu login
$ jitsu deploy
$ 🙂

Why didn’t you use Firebase?

I used Cloudant, which has Lucene search built in (and is based on CouchDB, and is very easy to use). Firebase can now be used with Elasticsearch, but when I started that required extra installation.

Why didn’t you just use MySQL or …

I probably should have. In fact, an SQL database with Lucene for full text search might have been much more appropriate than CouchDB. (This kind of search is actually much easier with Firebase now.)

How was CouchDB?

Good in some ways, and quick. In particular, the JSON/HTTP/REST style fits well with Node/JavaScript development.

Problems came with full text search:
• Full text search is not built into CouchDB, though it can be added on with Lucene or other search engines.
• CouchDB searches return entire documents, with no ‘partial’ results. (In my case, a document represents all data for a video.) So, for example, if I want to return only captions that include ‘Android Wear’, I need to retrieve all the documents (in their entirety) that have captions that mention ‘Android Wear’ then filter.
• CouchDB search queries cannot be combined: for example, ‘get me all videos from 2013 with WebRTC in the title’. So, again, you have to add your own filter.
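Because whole documents come back, the filtering described above has to happen in the middle layer. A minimal sketch of that post-filtering step, assuming a document shape of {_id, captions: [{start, text}]}; not the app's actual code.

```javascript
// CouchDB/Lucene returns whole video documents, so pick out only the
// captions that actually contain the query phrase (case-insensitive).
function filterCaptions(docs, phrase) {
  var needle = phrase.toLowerCase();
  var matches = [];
  docs.forEach(function(doc) {
    doc.captions.forEach(function(caption) {
      if (caption.text.toLowerCase().indexOf(needle) !== -1) {
        matches.push({videoId: doc._id, start: caption.start, text: caption.text});
      }
    });
  });
  return matches;
}

var hits = filterCaptions([{
  _id: 'abc123',
  captions: [
    {start: 0, text: 'Welcome to the show'},
    {start: 12, text: 'Android Wear is a new platform'}
  ]
}], 'android wear');
```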

How big is the database?

Around 250MB, but more like 150MB without transcripts: the transcript for each document is really just a convenience to make it quick and simple to retrieve human readable transcripts, and replicates the captions (with a few tweaks).

How often is the data updated?

At present I’m updating the database manually to avoid code changes breaking it, but once the code settles down I’ll run automatic updates every (say) 30 minutes.

Why didn’t you use io.js?

No big reason. Node.js has been around longer.

How many documents have transcripts?

Last time I looked: 4312 videos, 3550 with transcripts.

How did you get the speaker names?

With a bit of sneaky regexing these are parsed from transcripts.

NB: speaker names are not parsable for many captions, so speaker search results may not always be complete.
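For flavour, here is a guess at the kind of ‘sneaky regexing’ involved: manually produced captions often mark a speaker change with an upper-case name followed by a colon, e.g. ‘RETO MEIER: Hi everyone.’ The app's actual pattern may well differ.

```javascript
// Collect unique speaker names from lines that start with an
// all-caps name followed by a colon. Lines without that marker
// (including ordinary sentences) are ignored.
function parseSpeakers(transcript) {
  var speakerRegex = /^([A-Z][A-Z .'-]+):/;
  var speakers = [];
  transcript.split('\n').forEach(function(line) {
    var match = speakerRegex.exec(line.trim());
    if (match && speakers.indexOf(match[1]) === -1) {
      speakers.push(match[1]);
    }
  });
  return speakers;
}

var speakers = parseSpeakers(
    'RETO MEIER: Hi everyone.\nGreat to be here.\nWAYNE PIEKARSKI: Thanks, Reto.');
```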

Why are caption matches returned as span elements?

The primary use for the caption matches is within HTML markup. Returning JSON for each span might be neater and less verbose, but for most apps that would entail extra effort transforming to HTML. I could be persuaded otherwise.
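A sketch of what ‘caption matches as span elements’ can mean in practice: each match is wrapped so the client can style or highlight it directly. The class name and the wrapping function are assumptions, not the API's actual output format.

```javascript
// Wrap every case-insensitive occurrence of the query in a span element.
function markMatches(text, query) {
  // Escape regex metacharacters in the query before building the pattern.
  var escaped = query.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return text.replace(new RegExp(escaped, 'gi'), function(match) {
    return '<span class="match">' + match + '</span>';
  });
}

var html = markMatches('Android Wear works with Android phones', 'android');
```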

How long does it take to store and index data?

This depends a lot on connectivity. From work, the app gets and inserts the video data and transcripts in under three minutes. From home, it takes about 10 minutes.

Indexing takes about 10 minutes.

What build tools do you use?

I use JSCS and JSHint with grunt and githooks to force validation on commit.

Chrome JSON formatting extensions were very useful.


  • I haven’t written any unit tests – yet.
  • Error handling is minimal.
  • Code refactoring is in the pipeline.
  • I don’t know a lot about working with sockets on Node, so a lot of the code has to be deliberately synchronous to avoid errors. I’m sure someone could help me…
  • The API is HTTP only as yet.
  • Use the official YouTube Captions API.
  • I wanted to use Firebase, but when I started it was a bit tricky to implement full-text search, so I opted for Cloudant. It’s now pretty simple to use Firebase with Elasticsearch, so I’ll port the data at some stage.
  • Database updates are done manually at the moment — mostly because I’m worried about messing up the sample app. Easily automated.

About Sam Dutton

I am a Developer Advocate for Google Chrome. I grew up in rural South Australia, went to university in Sydney, and have lived since 1986 in London, England. Twitter: @SW12
