In preparation for his talk at the upcoming Codemotion Deep Learning virtual conference next week, I spoke to Luiz Gustavo (Gus) Martins, TensorFlow Developer Advocate at Google.
He explained that his main role is to help developers use TensorFlow in the best way possible. “I help them achieve their goals and I also bring feedback to the TensorFlow team to keep the product improving and addressing developers’ needs. As a developer advocate, I end up being a TensorFlow developer myself because this is a great way to understand developers’ needs. I have to understand how and what they’re trying to do. This is very cool because I have the opportunity to learn a lot. Sometimes it’ll be too hard for me, but you just keep studying and you’ll get there.”
TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML, and developers easily build and deploy ML-powered applications.
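To give a sense of what “build and deploy” looks like in practice, here is a minimal sketch using the standard tf.keras API. The data is random placeholder input purely to illustrate the workflow, and the model and file names are my own examples rather than anything from the interview.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 1,000 samples with 20 features and a binary label.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

# A small feed-forward classifier built with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(x, y, epochs=5, batch_size=32)

# Export in the SavedModel format, which the rest of the ecosystem
# (TensorFlow Serving, the TFLite and TensorFlow.js converters) can consume.
model.save("my_model")
```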
The Genesis of TensorFlow
Google, as a company, is always working on improving its products. It started using machine learning internally, and as Gus explained, “The Google Brain team started working on machine learning and also built a framework to help solve research and business-related problems. We developed TensorFlow internally and decided to open source it in 2015. With this, the ML community would have a great tool to work with and help push the field forward.”
Last week TensorFlow surpassed 100 million installs, a remarkable figure for a product that has only existed for five years, especially since, as Gus notes, it is fairly niche: originally designed for people working specifically in machine learning rather than generalist developers.
Gus notes that “As people start to find out more about machine learning, they start to think, ‘oh, maybe I should use this in my field, maybe it can help with my problems.’ ” This was the case when the idea of running ML models on mobile phones came up. As these devices get more and more powerful, running an ML model on-device became feasible, provided the framework supported it well. This led to the release of TensorFlow Lite in 2017. “That was something that people hadn’t even thought about yet: machine learning on the phone, with all the hardware restrictions? That’s when the framework was optimized to run on phones, with all the processor and memory challenges that they have. This enabled unique use cases with privacy and offline execution in mind.”
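The on-device workflow typically starts by converting a trained model to the compact TensorFlow Lite format. As a rough sketch, assuming the “my_model” SavedModel directory from the earlier example, the conversion looks like this:

```python
import tensorflow as tf

# Convert a trained SavedModel into the TensorFlow Lite format for on-device inference.
converter = tf.lite.TFLiteConverter.from_saved_model("my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# The .tflite file is then loaded by the lightweight interpreter
# (via the Android/iOS runtimes, or in Python for a quick test):
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
```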
Always Evolving
Since then, TensorFlow has kept evolving. Gus details: “TensorFlow Lite was extended to work with microcontrollers and all kinds of embedded devices. TensorFlow.js was introduced to give JavaScript developers access to the framework as well, enabling ML models to run in the browser.”
Google also created TensorFlow Extended (TFX), an end-to-end platform for deploying production ML pipelines, in response to the particular needs of enterprise users:
“There are different challenges that you have to address when you want to deploy ML models in production. TFX brings the tools we built to meet the needs we faced internally, and they are now available to everyone.”
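A TFX pipeline chains together components for ingesting data, training, validating, and pushing models. The following is only a rough sketch of how a minimal pipeline is wired together; the paths, pipeline name, and “trainer_module.py” training module are placeholders, and a real production pipeline would add components such as schema validation and model evaluation.

```python
from tfx import v1 as tfx

# Ingest CSV data and train a model defined in a separate module file.
example_gen = tfx.components.CsvExampleGen(input_base="data/")
trainer = tfx.components.Trainer(
    module_file="trainer_module.py",
    examples=example_gen.outputs["examples"],
    train_args=tfx.proto.TrainArgs(num_steps=100),
    eval_args=tfx.proto.EvalArgs(num_steps=10),
)

pipeline = tfx.dsl.Pipeline(
    pipeline_name="demo_pipeline",
    pipeline_root="pipeline_output/",
    components=[example_gen, trainer],
    metadata_connection_config=(
        tfx.orchestration.metadata.sqlite_metadata_connection_config("metadata.db")
    ),
)

# Run locally; the same pipeline definition can also run on orchestrators
# such as Apache Airflow or Kubeflow Pipelines.
tfx.orchestration.LocalDagRunner().run(pipeline)
```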
A healthy mix of TensorFlow users
TensorFlow users are a healthy mix of academics, researchers, enterprise, startups, and hobbyists. As Gus notes, “We have users using TensorFlow in all kinds of projects. We talk to universities, as well as people just starting to learn machine learning. I talk to people that are like myself in the past when I was an Android developer. We have researchers who use TensorFlow to develop their projects and of course we have tools to support them like TensorBoard and TensorFlow Hub. We also have, of course, big enterprises in all kinds of sectors using TensorFlow. And then there are the hobbyists who we love because they always come up with great ideas, and they build things super fast.” Gus gave the example of a self-driving car project developed by the community using TensorFlow.
How can a newbie get started?
I was interested to know about the ease of entry for people new to machine learning. Gus explained, “We are making our tools, documentation and guides easier and easier to use. There’s, for example, the Teachable Machine project where you can just go to a web page and train a model. If you want more resources to get a better understanding, there’s our YouTube channel with lots of content, from beginner to advanced levels. There’s also more in-depth content on the Machine Learning Crash Course page. It’s a great time to learn.”
The benefits of Colab
One great resource is Google’s Colab, which allows you to write and execute Python code in your browser, with zero configuration required, free access to GPUs, and easy sharing. With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just a few lines of code. Colab notebooks execute code on Google’s cloud servers, meaning you can leverage the power of Google hardware, including GPUs and TPUs, regardless of the power of your machine. All you need is a browser and an internet connection.
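The kind of notebook Gus describes might look roughly like the sketch below: loading a built-in image dataset (Fashion MNIST here, as an illustrative choice), training a simple classifier, and evaluating it. Pasted into a Colab cell, this runs entirely on Google’s servers.

```python
import tensorflow as tf

# Load a small built-in image dataset: 70,000 28x28 grayscale clothing images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A simple fully connected classifier over the flattened images.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)

# Evaluate on held-out data; on a Colab GPU this trains in a couple of minutes.
model.evaluate(x_test, y_test)
```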
Gus shared: “I was talking to someone this week in a region where the internet is not so fast. Machine learning requires a lot of data, a lot of downloads. So I was talking to them, and they were happy that we have Colab, because they could access all their resources in the browser. They didn’t have to download anything to their machines, and they could keep working and learning.”
If you’d like to learn more about machine learning and TensorFlow, join us next week on May 27th for our virtual Deep Learning Conference, where a fantastic cohort of speakers, including Gus, will discuss advances in technological research and real-world applications.