ML with Dale Markowitz: GCPPodcast 194


[MUSIC PLAYING] AJA HAMMERLY: Hi, and
welcome to episode number 194 of the weekly
Google Cloud Platform Podcast. I’m Aja, and I’m here
with my colleague Jon. Hey, Jon. JON FOUST: Hey,
Aja, how’s it going? AJA HAMMERLY: Pretty good. Yourself? JON FOUST: Pretty good. I’m pretty happy to
finally be done at PAX. It was a really
great experience, and I got to see you. AJA HAMMERLY: Yeah, we did get
to hang out here in Seattle and enjoy a brief Seattle
summer day together. JON FOUST: And the new
office is pretty nice. AJA HAMMERLY: Yeah, our new
Google Cloud office in Seattle is A-plus. It is amazing. So this week on
the podcast, we’re going to be talking to one of
our fellow developer advocates working on Cloud,
Dale, and we’re going to be talking
more about ML. JON FOUST: Sounds cool. Dale’s a really, really
great friend of mine. Sad to see her go, because
she’s moved on from the New York office to another office. But we did share one last
moment, and it was pretty good. AJA HAMMERLY: Yay. And then later, after
our interview with Dale, we will have a question of
the week about containers because everyone’s
excited about containers. But first, cool
thing of the week. [MUSIC PLAYING] You’ve actually got more cool
things of the week than I do, Jon. So why don’t you start? JON FOUST: The first thing
is a blog post written by Simon Zeltser, DPE
on the Cloud team, and it is about building
a development workflow for Cloud Code on a Pixelbook. And I found this blog post really helpful, because when I'm teaching my students and they're working with Pixelbooks, it's very hard to set up an environment for software development on a Pixelbook. But this blog post
goes into depth about how to set
up the environment, get started with Visual Studio Code, and get started with Cloud Code. He also takes a step
into setting up containers and working with
Docker and everything. So it’s pretty cool
and pretty awesome. AJA HAMMERLY: That sounds cool. I’ve been meaning to spend
more time with Cloud Code, so I will totally
check that out. My cool thing of the
week is an article I ran across
yesterday on Twitter. The article itself
is on “Medium,” and it will be linked
in our show notes, but it’s about how
the ideas of Agile are actually ideas that feminism
and the civil rights movement and the progressive
movement and other movements have been pushing
for years and years around iterating, failing
fast, cooperation, seeking to hear all voices. And I found it
really educational because it tied
together some stuff that had been banging
around in my head apparently for the last year. Because when I woke up this morning, social media reminded me that I actually
wrote a social media post about this exact same issue
exactly one year ago today. So clearly, this has been
on my mind for a while. But the author of this blog
post does it much better than I ever will. And so I’d recommend
checking out the show notes and going and giving it a read. JON FOUST: That sounds awesome. It’s very interesting to see
the similarities between Agile and feminism. So I’m kind of curious
what that really is about. So I’ll probably give
it a really good read. AJA HAMMERLY: I recommend it. Do you have any more cool
things for us this week? JON FOUST: I do. So considering that this
is an ML episode with Dale, I decided to pick up
something about the AI Hub. And they have a new web page
and improved collaboration features, which
is really awesome because it allows you to– as soon as you
land on a page, you get to see a lot of commonly
used models and things that you can share with
people on social media. They can be publicly
accessible to everyone. So the AI Hub is doing
really, really great things. So if you are really big
into ML, then give it a look. AJA HAMMERLY: That
sounds awesome, and it is a lovely segue into
our interview with Dale, where we’re going to talk about
NLP, Translate, practicing AI, and kind of how AI fits
into the bigger world. We’re looking forward to
sharing all that with y’all. JON FOUST: All right. So let’s give Dale
the spotlight, and let’s go to our episode. [MUSIC PLAYING] So in this episode of
the Google Cloud podcast, we are hanging out
with Dale Markowitz, who is an awesome, awesome
teammate of ours in developer relations. So Dale, would you like
to introduce yourself? DALE MARKOWITZ: Hi, I’m Dale. I’m an applied AI engineer
and developer advocate, and I work on all of the
tools in Google Cloud that have to do with machine
learning and words, like natural language
processing, translation, speech, stuff like that. JON FOUST: Awesome. And can you tell us a little
bit about what you do? DALE MARKOWITZ: Let’s see. I do lots of things. So I work with engineers
internal and external to Google to help
them understand how they can use machine
learning, how they can use natural language processing. I also spend a lot of
time with engineers that don’t have a
background in data science to help them understand how
they can use machine learning and how they can understand
how well a model’s performing and what’s an
applicable use case. JON FOUST: So yeah, that
sounds pretty interesting that you work with people with
no background in data science. Because, well, a lot of people probably know that our job as developer advocates is to communicate for our developers externally and internally, meaning that
we talk to our internal teams, and we speak externally
to developers about cool new features. But you speak with
people that have no background in data science. That's kind of interesting, because you're technically advocating for ML as opposed to a specific product. So can you tell
us how someone who doesn’t have a background
in ML would probably get started, in your
opinion, or go about getting into learning about ML? DALE MARKOWITZ: Yeah. It’s kind of
interesting because I don’t know that my job could
have existed five years ago. Because back then– for
example, when I first started learning about
machine learning– there was a pretty
high barrier to entry. And you had to be a
programmer to get started, but you also had to have a good
understanding of data science. And just the tools that
were available to you five years ago I think
were pretty rough for new developers. Like, for example, I think
TensorFlow came out in 2015. So this is a tool for
building neural networks. It’s a really popular tool for
building neural networks today. But back when it
came out, it was really tough to get started with
because you had to understand how to program, you
had to understand how neural networks worked
at a pretty low level, but you also had to
understand distributed computing because TensorFlow
itself has this complicated distributed graph architecture. Anyway, what I’m
trying to say is it was kind of hard to
get started in machine learning a couple of years ago. But there has been so
much progress today, like, for example, TensorFlow
getting a lot easier to use and more user-friendly. And a lot of the tools that
I work with here at Google really make it so
that you can even build a neural network if
you don’t have a data science background. So now we have to start
thinking about, OK, now that software developers who don't have all the data science training are starting to build neural nets, what are their concerns? What do they have to be wary of? AJA HAMMERLY: So that's so cool. At the top of the
show, actually, you said something about
natural language processing. Can you give me the 5,000-foot
view of what that is? DALE MARKOWITZ: Sure. Natural language processing
is the study of algorithms that deal with processing text. And usually today, that means
basically the intersection of machine learning
and text processing. AJA HAMMERLY: So
what kinds of things is natural language processing used for in the real world? DALE MARKOWITZ: So
in the world, people use natural language
processing for lots of things. So one really common use
case is you have lots of text from your customers posting
about your company on Twitter or filling out a feedback
form about your product. Or maybe you have a call center
where people are calling in to ask questions, and you use
something like Speech-To-Text to transcribe those calls. And then, you want
to analyze to see what people were talking about. So one common natural
language processing problem is that you have a lot of text and you want to see what topics are being talked about, like topic modeling, or to sort a bunch of pieces of text into categories. Another common task
you might imagine is something called
sentiment analysis. So this is about
what emotions people are expressing when they write. So are people saying positive
things or negative things about your product on Twitter? AJA HAMMERLY: That sounds
super cool and super powerful. JON FOUST: I want to
talk a little bit more about those people that don't have ML degrees. I was kind of curious. We have people who have degrees in ML, and their job is to work with models. But for those people without degrees who work on models, creating models is a very wishy-washy subject, because you can create a model that is completely biased. How does someone without a degree create a model that is deemed unbiased, in your opinion? DALE MARKOWITZ:
That’s a good question because it is true that if you
are doing a normal software engineering task then– not all the time,
but a lot of the time– you can immediately tell if
it’s broken because your program crashes in a really
horrendous way. Unfortunately, with
machine learning, you can be working on
something for months, and then you put
it into production, and you don’t even realize
it’s broken for a while. And it’s broken in a
way that is surprising or that was hard for
you to anticipate. Like, for example, a lot of
times we want to label things. So we have pictures
of dogs and cats, and we build a model that
automatically labels them as dogs or cats. So you might think that
a really good model just automates this process. But actually, a really
good model only correctly labels these things
most of the time. But all models make mistakes. So this is just a natural thing. And when you’re
developing, first of all, you have to expect
this to happen, and you have to make sure
that the consequences of your model making a
mistake are not that bad. This is why I hope that most
of the people that I’m talking to that are trying out machine
learning for the first time are planning on deploying their
models in non-crucial settings, that they’re not
building navigation systems for airplanes or
analyzing medical scans and then making life
or death decisions. So the first thing I think
is to not choose applications where the consequences of making
a mistake are really deadly. Then the next thing that
we would tell people is to consider the
different types of errors that a model can make because
there are different ways a model can be wrong. For example, with the
cats and dogs case, you could label a cat as
a dog or a dog as a cat. Let’s say that we’re
looking at scans of lungs, and we want to see if we can
detect pneumonia automatically. Machine learning is very good
at understanding medical scans. Actually, a lot of
models we have today can even beat human
practitioners. But let’s say we’re
doing this task. So the first thing
that a model can do is it can give you
a false positive. So this would mean it
looks at a scan of lungs and thinks that you have pneumonia when you really don't. That error is
definitely horrifying, but maybe it’s not
that bad because it means that you, I don’t know,
spend more money on scans and doctors only to find
out that you didn’t really have a problem. But then there’s another type of
error called a false negative, where you really
did have pneumonia, but the model says
that you didn’t. And that’s much more
costly because now you’ve just overlooked a potentially
life threatening or very serious disease. So the next thing
for practitioners is to think about all these
different types of errors and understand what
you would do in each of these different cases. So another way that
models can be wrong is that they can
somehow introduce bias. And there are many ways
that this can happen, but usually it happens
because the examples that you’ve shown your
model, the training data, is in some way biased. So for example,
in my opinion, one of the worst ways that machine
learning has been applied is to something like
predictive policing. So you want to somehow
collect data about people and predict who’s
likely to commit a crime or who’s likely to
re-commit a crime. The problem here
is that which people get pulled over for crimes in the first place is itself biased. And in fact, years and years of
policing has innate human bias. So if you train a
model on that data, you’re just going to
reflect those biases. The right solution here is
to choose your data sets carefully, so that they reflect
all of the different groups you can think of where there might be a difference in performance. So let's say you're training
a vision model to identify people’s faces. You want to make sure that
your data set has people of all different races, all
different ages, sexes, outfits, et cetera. And if you’re really
concerned about this, then you want to look at
how well the model does specifically on these groups. So break your data into
these different groups, and make sure that
performance is pretty similar across all of them.
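For a rough sense of what that sliced evaluation looks like in practice, here is a minimal Python sketch; the labels, predictions, and group memberships are hypothetical placeholders.

    from collections import defaultdict

    def accuracy_by_group(labels, predictions, groups):
        # Tally correct predictions and totals separately for each group.
        correct, total = defaultdict(int), defaultdict(int)
        for label, prediction, group in zip(labels, predictions, groups):
            total[group] += 1
            correct[group] += int(label == prediction)
        return {group: correct[group] / total[group] for group in total}

    # If one group's accuracy lags far behind the others, the training data
    # probably under-represents that group.
    print(accuracy_by_group(labels=[1, 0, 1, 1],
                            predictions=[1, 0, 0, 1],
                            groups=["a", "a", "b", "b"]))  # {'a': 1.0, 'b': 0.5}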
This is something we're really interested in at Google. We're spending a lot of time thinking about how we can make this clearer to the user and how we can integrate it into our products. AJA HAMMERLY: So do you
have any guidelines? I mean, you mentioned
looking at all the ways that it can go wrong. Do you have any
guidelines for folks who are just getting
into machine learning or aren’t necessarily
data scientists by trade but are using some
of the products that make machine learning easier? What kinds of stuff
should they be thinking about when they’re
working on their models? DALE MARKOWITZ: Sure. So when you first
learn machine learning, you’ll have to
learn how to parse a bunch of different
metrics to just baseline evaluate how well the model is
doing, like precision and recall. I mention those because you'll probably have to read a blog post and spend a couple of minutes thinking about them before they really sink in.
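As a quick illustration, precision and recall boil down to two ratios over a model's hits and misses, sketched here with made-up counts in the spirit of the pneumonia example from earlier.

    def precision_recall(tp, fp, fn):
        # Precision: of everything the model flagged positive, how much was right?
        # Recall: of everything actually positive, how much did the model catch?
        return tp / (tp + fp), tp / (tp + fn)

    # Say a model catches 80 of 100 real pneumonia scans (tp=80, fn=20)
    # but also wrongly flags 40 healthy scans (fp=40):
    precision, recall = precision_recall(tp=80, fp=40, fn=20)
    print(f"precision={precision:.2f}, recall={recall:.2f}")
    # precision=0.67, recall=0.80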
But the strongest advice that I have is probably this: you have this data that you use to train a model, and then you try to somehow deploy it into production to make predictions on the fly. And a lot of times, people get caught out because the data set that they used to train the model doesn't actually reflect the data that they're making predictions on. So maybe I use a
bunch of stock photos to train a clothing classifier. But then, when I actually
use my clothing app and people are taking pictures
of themselves, well suddenly, those photos look a lot
different than stock photos. So that’s a major
source of error. And also, sometimes, the
distribution of data can shift. So in the beginning, your
model works really well. And then style trends change. And suddenly, it doesn’t
recognize that tutus are in or something. I would really advise that when
you deploy a model, before you let it go into the wild, you
watch how it performs on data. You don’t let it
make predictions, but you sort of simulate what
predictions it would make. And make sure that
you’re OK with those. And also, understand
that you do have to be continually monitoring
your model because of the fact that the real
world could change.
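One way to do that kind of dry run is to score live traffic in "shadow mode": log what the model would have predicted while the real decision still comes from your existing logic. A minimal sketch, where model.predict and existing_handler are hypothetical stand-ins:

    import logging

    def serve(request, model, existing_handler):
        # Log what the model *would* have predicted, without acting on it.
        shadow = model.predict(request)
        logging.info("shadow_prediction=%s request=%r", shadow, request)
        # The real response still comes from the existing, trusted path.
        return existing_handler(request)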
AJA HAMMERLY: So it sounds like when you're using machine learning, you need to be constantly on top of it to make sure that it's telling you what you think it's telling you? DALE MARKOWITZ: To make
sure it’s telling you what you want it
to be telling you. AJA HAMMERLY: OK. JON FOUST: So I’m
kind of curious. Since you have to
continue monitoring it, you can’t anticipate how
your users are actually going to use your models. How long do you think it
actually takes for you to– the word “perfect” is going to
be in air quotes right now– but how long do you think it
takes to create an accepted, fully-functioning model? DALE MARKOWITZ: It
depends on the task. It depends on your standards for
how good it should be working. For example, there’s
also a lot of thought around designing interfaces so
that users that are interacting with machine learning models are
aware that there’s going to be some probabilistic component. JON FOUST: So I’m
kind of curious. When do you think it is
actually acceptable to use ML? Because you can use ML
for a bunch of things to solve a bunch of issues. And you can probably
use ML to do things that probably aren’t ethical. But when do you find it
acceptable to use ML? DALE MARKOWITZ:
Like I said, I think that there are lots of cases. I wouldn’t use machine learning
for anything life threatening. I wouldn’t use machine learning
for anything that determines the fate of people’s lives. There are sort of lots of, I
think, low-hanging tasks that– one of my colleagues
described machine learning as being kind of a
little bit of a dumb intern. [LAUGHTER AND APPLAUSE] It takes a lot of
training, and it can only do very menial
tasks, but there are lots of little tasks like
that in the world that have yet to be automated that
it would be great if they were. Just for example,
journalists, they interview people for
stories, and they collect hours and hours of
transcripts, and they sit down and they transcribe them. So journalists, a
lot of their time is spent just literally
copying an audio file to text. That’s something that we could
do with machine learning, and it’s something that I don’t
think anyone would complain about as an application. AJA HAMMERLY: That
makes a lot of sense. So you mentioned that you’re
into natural language. What kind of natural language
tools are available on GCP? Because I’ve played a little
bit with some of the tools, trying to detect if one of
my friends was making a pun or not because he is a
very punny human being. But I feel like
there’s a lot of tools that I don’t even know about. So what kinds of tools do we
have around natural language on GCP? DALE MARKOWITZ:
So glad you asked. With machine learning, it goes
sort of like on this spectrum of easy-to-get-started
to advanced. So the easiest tools
that you could use would be something like
the Natural Language API. So this is an API
used like any other. The input is text, and it can
do things like detect sentiment, detect entities,
like people’s names, places, prices, dates,
addresses, and so on. It can also detect
the subject of text. So if you uploaded a “New York
Times” article about politics, it could distinguish that from an article about sports, and so on. So that's the easiest entry way into natural language processing.
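For reference, a minimal sketch of calling the Natural Language API from Python, assuming the google-cloud-language client library is installed and credentials are configured:

    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="I hate oranges, but I love Seattle summers.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    # Sentiment score runs roughly from -1.0 (negative) to +1.0 (positive).
    sentiment = client.analyze_sentiment(
        request={"document": document}).document_sentiment
    print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")

    # Entities: people's names, places, prices, dates, addresses, and so on.
    for entity in client.analyze_entities(request={"document": document}).entities:
        print(entity.name, language_v1.Entity.Type(entity.type_).name)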
Then a step up, which I wouldn't say is harder, but maybe more sophisticated, is a tool like AutoML Natural Language. It's a tool that allows
you to train your own model on your own data from scratch,
and you do this with a GUI, and it’s quite easy to use. And there are a bunch
of different models that we can build for text. So the first one is the
classification model. We worked with a recipe
company, and they wanted to be able to
categorize their recipes into categories like southern food or Mexican food. So you could upload a bunch of recipes, have humans label them, and then train a model to do this automatically. So that's classification. You can also build a
custom sentiment model. So again, this is
about emotions. Is this piece of text expressing
positive or negative feelings? And within AutoML, you
can sort of fine tune this model to your
own specific data. So for example, the
Natural Language API can tell that if I say,
I hate oranges, that I’m expressing negative sentiment. But if I said instead, there
is no leg room in coach, then a model might not know if
that is a bad thing because leg room is such a specific niche
thing to flying on an airplane. So you might train
a custom model that’s more aware of your
specific industry terminology. Also, a new AutoML feature
that was released recently was Custom Entity Extraction. So this is kind of
like classification, but you can use it to
identify words within text. So for example, you
could upload a W-4 and train a model to extract
all of the form fields that you’d filled out. I built a model to
classify patents. So you have a bunch of
patent applications as PDFs, and you identify different
fields in the patent. So that’s AutoML. And then there are
lots of options for people that are more sort
of savvy with data science. So if you’re willing to
get your hands dirty with TensorFlow, then the types of natural language models you can build are endless, and we have
lots of infrastructure for helping you train
and deploy those models. AJA HAMMERLY: Cool. JON FOUST: So Dale,
you also mentioned you work in translation. So is there anything
new that’s come about with the Translation API? DALE MARKOWITZ: Yeah, actually. We just recently
released some updates. So if you’ve never used
the Translation API, it’s sort of like Google
Translate but in API format. So you can use our
models within your app. So the Translate API
works, I think, very well. However, there are
some cases where it doesn’t do as great of a job. For example, Google Cloud has
lots of documentation online and lots of really
specific terminology, like Kubernetes, or Cloud
Run, or TensorFlow, and all this stuff that maybe a generic translation model wouldn't have seen before. So we have a couple
of features that can help you improve
your translations for this sort of custom domain. One thing, if you want to build
a really high quality model, is AutoML translate. So I just talked about
AutoML Natural Language, but AutoML Translate allows you
to upload lots of translations that you’ve done on
whatever your data set is. So for example, with
the GCP documentation, I might upload
English documentation and Japanese documentation. And then, AutoML will train
a model that automatically does those translations
that includes this knowledge it learns from
your own specific data set. So this builds very
accurate models, but it takes a lot of data
to train a model like this. But we also just
released a new feature to allow you to customize
your translations just by using the API. And that’s this feature
called Glossary. So the idea is– let’s say that I’m going to
translate lots of things that also mention Google,
but there are some words that I know exactly how I
want them to be translated. So I want Kubernetes to be
translated into Japanese in this very specific way
that is not the default way that the
Translation API does it. So Glossary lets you upload
a list of word pairs in one language and then the
one you’re translating to, so that the API will, before
it does any translation, just make these one-for-one
replacements using your preferred translation. And then, it will translate the rest of the sentence.
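For reference, a minimal sketch of a glossary-backed call with the google-cloud-translate v3 client; the project ID and glossary name are hypothetical, and the glossary must already have been created from your list of term pairs:

    from google.cloud import translate_v3 as translate

    project_id = "my-project"  # hypothetical
    location = "us-central1"   # glossaries are regional resources

    client = translate.TranslationServiceClient()
    parent = f"projects/{project_id}/locations/{location}"
    glossary = client.glossary_path(project_id, location, "gcp-terms")

    response = client.translate_text(
        request={
            "parent": parent,
            "contents": ["Deploy the service with Kubernetes and Cloud Run."],
            "source_language_code": "en",
            "target_language_code": "ja",
            "glossary_config": translate.TranslateTextGlossaryConfig(
                glossary=glossary),
        }
    )
    # Glossary-aware results come back in glossary_translations.
    for translation in response.glossary_translations:
        print(translation.translated_text)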
AJA HAMMERLY: That's actually super powerful, because I don't know of any domain that doesn't have domain-specific vocabulary or use words in ways that normal people don't. And sometimes, you
have terms that need to be translated
in very specific ways. So that’s super cool. I can imagine a lot of people
would get use out of that. DALE MARKOWITZ:
Yeah, we actually think about it a
lot for translating our own documentation. AJA HAMMERLY: The best use cases are the ones we experience ourselves, sometimes. That's cool. JON FOUST: So Dale, you and I
have been really good friends in the office, and we’ve
been talking about a couple of things that you’ve
been working on. So you mentioned that you
work on this project called Google News Lab. So I’m curious to learn a
little bit more about it, and I’m pretty sure
our listeners are, too. DALE MARKOWITZ: Yeah, sure. So ever since I joined Google,
I knew about the Google News Lab and thought it sounded
really awesome. It’s this team here that
works with newsrooms to help them advance
journalism through technology. So they’ll work with
different newsrooms and help them build out
technology that helps with investigative reporting. And what I do with them is
I’m sort of like a machine learning consultant. So I’ll talk to
newsrooms about how they might use machine
learning in a project or something like this. AJA HAMMERLY: Do you have any
examples you can tell us about? Because I’m not sure I can
imagine what that looks like. DALE MARKOWITZ: Yeah, sure. I can tell you about a project
that we just worked on. So I recently finished
up working on a project with a newspaper in Mexico
called “El Universal.” They wanted to tackle a really
complicated issue, which is that in Mexico, there are a lot of homicides. And unfortunately, some areas
are so dangerous that reporters can’t even cover
homicides, because they are at risk in doing so. These are called news deserts: areas that are too dangerous to report on. So we worked with this
newspaper to build a system that could identify
these places where stories weren’t written on crimes. So to do that, Google News
has this comprehensive stream of news stories written
all around the world. So we looked at all of the
news stories written in Mexico, and we whittled it
down to stories that were written about homicides. And then, we sort of
used machine learning, and we used AutoML,
actually, to figure out which of those stories would
be relevant for coverage. So they were talking
about a crime committed on a specific day in
a specific region. We were able to sort of use
this to build this map of places where crimes were
being committed, but they weren’t
being reported about. So it’s a pretty heavy,
sobering subject, but it was needed to help out with this reporting. JON FOUST: This seems
really interesting. You’ve mentioned
that you used AutoML, but specifically, what
product in AutoML? Like, NLP, Translation? DALE MARKOWITZ: Yeah, we used
the Natural Language API and AutoML. So we used the API to do
things like extract locations mentioned in news stories. And then, we used AutoML to
build a classifier that figured out which articles were relevant
for our reporting versus which weren’t. AJA HAMMERLY: You
said that it’s heavy, but crime reporting is
super, super valuable. People need to know what’s
going on in their neighborhoods. And I wish that this wasn’t
a product that was needed, but at the same time,
it sounds like it’s solving a real problem
to ensure that people can get news relevant
to them in spite of what may be going on around them. That’s super cool. DALE MARKOWITZ: I hope so. AJA HAMMERLY: So we’re
almost running out of time, but is there anything
that we haven’t asked you about that you
want to talk about or that you think our listeners
would be interested in? DALE MARKOWITZ: Oh, yeah. Actually, there is one thing. I want to talk about my
favorite service on GCP that I feel like
is kind of hidden and people don’t know exists. But if you are in
machine learning, it's a super useful tool, and probably the biggest unblocker: our new Data Labeling Service. So the hardest part of machine
learning, in my opinion, is not learning what a neural
network is, or accuracy, or all of this stuff. It’s actually finding a
labeled data set that’s appropriate for your task. So you can go on Kaggle and try
to find one that’s suitable, but if you want to, for example,
classify your own company's internal documents, then to train machine learning models, you need to label those documents. And sometimes, you need to
create hundreds or thousands of labeled examples, which
means that you probably need to employ a lot of interns. At least, until now. Because now, you can use
Google’s Data Labeling Service. You basically write
up a description of how you want your
data labeled, you send it to the service, and then,
in a couple of days, the data set comes back
labeled by human beings. And it is designed to work
easily with all of our AutoML products. So you can create
a data set and then immediately plug the output
into AutoML to build a model. So that’s my favorite new thing. AJA HAMMERLY: That’s
awesome, really powerful for folks who may not have the
expertise or experience to go find the data sets
that they need, or maybe their data set is
proprietary for some reason or another. That’s cool. DALE MARKOWITZ: Yeah. JON FOUST: So
Dale, really quick, we like to ask our guests
to tell us something really interesting or
cool that was built using machine learning or
natural language processing. And I’m just curious. Do you have anything that
you’ve seen developed, built, or something that you’ve
built that you would like to share with our listeners? DALE MARKOWITZ: I have built
lots of little hacks myself. Like, for example, I used
AutoML to predict which posts would be top posts on Reddit. That’s a really hard task,
as you might imagine, but it did a pretty good job. Or predict, for example,
whether a comment that somebody writes on a forum is toxic. I think my favorite thing is
that once you have a data set, it’s very easy to build a model. So I’m always sort of trying
to see if I can predict things with AutoML. JON FOUST: That’s
awesome because you can imagine a lot of
people go on forums or they read
comments on YouTube. And you can imagine
a lot of them are probably deemed offensive or something. So if you can use NLP to actually scour the millions of comments and possibly either censor or even completely erase those types of comments, it makes for a better community. DALE MARKOWITZ: Yeah. Also, one of my favorite
things that somebody built is something that
you built, Jon. Can I talk about it? JON FOUST: Yeah, sure. DALE MARKOWITZ: So Jon
built something for gamers where they can speak
like they would to other people in the games. And then, their comments will
be translated in real time, so that people in lots
of different languages can insult each other
over the internet. I mean collaborate. I mean collaborate. AJA HAMMERLY: Yeah, I don’t
think insulting each other is the intended purpose of that. [LAUGHTER] JON FOUST: It’s been
a real fun project. And thanks to your
help, hopefully, it’ll be a very big success when
I give a talk about it. DALE MARKOWITZ: Yeah,
it’s super cool. AJA HAMMERLY: Thank you so
much for coming to chat with us today, Dale. If folks want to find
out more or follow the work you’re doing,
is there a good way that they can find out
more about your work or follow you on the internet? DALE MARKOWITZ: Sure. I’m always willing to give
people my Twitter handle. It’s @dalequark, like
the subatomic particle. Yeah, and I try to keep a
lot of things on “Medium.” So I’ll be there too. AJA HAMMERLY: Awesome. JON FOUST: And will you
be making any appearances anywhere? DALE MARKOWITZ: The
next thing that I’m doing that’s reasonably close
is I’m going to the Minnesota and Madison DevFests. They’re at the end of September. So if anyone’s there,
make sure you say hi. JON FOUST: Well, thanks
for joining us, Dale. It’s been a
pleasure, and I can’t wait to see what other cool
things you build with ML. DALE MARKOWITZ: Thanks. See you, Jon and Aja. JON FOUST: Thanks, Dale, for
the very, very, very in-depth conversation about ML. And now that we’re
wrapping up, we’re going to get into our
question of the week. [MUSIC PLAYING] How many different ways can
you run a container on GCP? AJA HAMMERLY: Yeah. I love this question, Jon. I think we did it
a previous time that I was on the podcast a year
or probably more than a year ago, but the answer has changed. [MYSTERIOUS MUSIC] So by my count, there
are at least five ways to run a container on GCP. The obvious one, Google
Kubernetes Engine, GKE. [PING] Also, now we have Cloud Run, [PING] which is another way
of running containers. But there are a couple of ways
most people don’t know about. First of all, App Engine
Flexible Environment can actually run a container. [PING] It’s based around containers. And so if you have a
container, that is an option. And Compute Engine– [PING] –and this is the one that even
a lot of Googlers don’t know about– can actually start
a Compute Engine VM from a container image. And then the fifth
one is you can always use a VM as a computer and set
up your own either Kubernetes environment or
Docker environment to run a container,
that way, not using any of the managed services. [PING] So by my best count, there are at least five ways to run a container on GCP.
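For reference, illustrative commands for each of those five options; the project, image, service, and cluster names are placeholders:

    # 1. GKE: create a cluster, then deploy with kubectl.
    gcloud container clusters create my-cluster
    kubectl create deployment my-app --image=gcr.io/my-project/my-image

    # 2. Cloud Run: fully managed, a single command.
    gcloud run deploy my-service --image=gcr.io/my-project/my-image

    # 3. App Engine flexible environment: set "runtime: custom" in app.yaml
    #    with a Dockerfile alongside it, then:
    gcloud app deploy

    # 4. Compute Engine: boot a VM directly from a container image.
    gcloud compute instances create-with-container my-vm \
        --container-image=gcr.io/my-project/my-image

    # 5. Plain VM: install Docker (or Kubernetes) yourself and run it.
    docker run gcr.io/my-project/my-image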
JON FOUST: That's awesome. And it's kind of interesting to see the evolution of GCP and how it supports containers. How many ways were there about a year ago, would you say? AJA HAMMERLY: For my entire time at Google, we've pretty much had four. Cloud Run is the new one. And Cloud Run is
cool and awesome, and folks should
go check it out. JON FOUST: Awesome. So Aja, I guess the next thing
is, where are you going to be? Where are you traveling to? Or are you doing anything? AJA HAMMERLY: I am
going to be at home, and I am so very excited to be
at home curled up with my cats. [SNORING AND PURRING SOUNDS] Hanging out. Travel schedule is pretty light
for the next couple weeks. How about you, Jon? JON FOUST: Well,
next week I will be at the internal Google
Games Summit, which should be a lot of fun. And after that, I will
be traveling to Montreal for three or four
days to spend time with some really close
friends from school and celebrate my
brother’s wedding, which is happening next month. AJA HAMMERLY: Awesome. I have been really
enjoying watching how the work at
Google around gaming has become more centralized
and become a thing. It makes me happy because
gaming is awesome. [GAMING MUSIC] JON FOUST: Definitely. And it’s really good to
connect with everybody, just talk about gaming
as one cohesive group. So it’s going to be
really great to catch up with a lot of folks. AJA HAMMERLY: Yeah. Hope you all have a good time. I think that’s it
for our episode. JON FOUST: Well, thank you
all for listening this week, and we hope to
see you next week. See you, Aja. AJA HAMMERLY: Bye, Jon. [MUSIC PLAYING]
