When AI Becomes Table Stakes by @ttunguz



Last week, I installed GitHub’s Copilot, a machine learning tool that helps engineers write software. I often code in R, Go, Ruby, Markdown, bash, and other languages to automate some task or update my CRM, so I was excited.

I typed def get_tweets_for_user(username) in Ruby. Copilot completed the entire function in less than 5 seconds, like a Gmail smart reply but for programming. I entered my API credentials, executed the program, and it ran. Five minutes’ work compressed to a tab-key press and a copy/paste.
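For a sense of what that completion looked like, here is a minimal Ruby sketch of the kind of function Copilot produced. The Twitter v1.1 user_timeline endpoint, the parameters, and the TWITTER_BEARER_TOKEN environment variable are my assumptions for illustration, not Copilot’s exact output.

```ruby
# Sketch: fetch a user's recent tweets via the Twitter REST API.
# Assumes a bearer token in the TWITTER_BEARER_TOKEN environment variable.
require "net/http"
require "json"
require "uri"

def get_tweets_for_user(username)
  uri = URI("https://api.twitter.com/1.1/statuses/user_timeline.json")
  uri.query = URI.encode_www_form(screen_name: username, count: 10)

  request = Net::HTTP::Get.new(uri)
  request["Authorization"] = "Bearer #{ENV['TWITTER_BEARER_TOKEN']}"

  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
    http.request(request)
  end

  # Return just the tweet text from each returned tweet object.
  JSON.parse(response.body).map { |tweet| tweet["text"] }
end

puts get_tweets_for_user("ttunguz")
```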

By the end of the day, Copilot had become essential – I won’t code without it. I suspect millions of others will feel the same way.

I use applied AI elsewhere. Two distinct machine learning systems have analyzed this blog post for grammatical errors, clichés, brevity, style, and weasel words.

Over the past decade, I’ve watched applied machine learning become table stakes for salespeople at Chorus.ai. Chorus records calls between an account executive and a prospect, then analyzes them for insight into style, structure, and content.

When IBM’s Deep Blue dueled Garry Kasparov in 1996 for chess supremacy, players wondered whether the computer sitting on the other side of the chessboard heralded the end of the game. Similar questions arose after AlphaGo. Today, top grandmasters spar with supercomputer-powered chess engines to improve their play.

Many articles were written twenty years ago about the potential demise of the human chess player. Those sentiments echo in the articles about the end of software engineering that popped up after the Copilot launch.

They have it wrong. AI enables us to focus on higher-level tasks rather than worrying about an errant semicolon, the syntax of a particular API, or taking notes during sales calls.

These assistants are the future of work, anticipating, suggesting, guiding, correcting – helping us accomplish more by abstracting away toil. That’s why they’ll become table stakes – everyone will need them to keep pace.


