Robocalls, Facebook Passwords – CS50 Podcast, Ep. 0

[MUSIC PLAYING] DAVID MALAN: This is CS50. Hello, world. This is the CS50 podcast. My name is David Malan, and this is Episode Zero, our very first, and I’m joined here by CS50’s own Colt– COLTON OGDEN: Yep. [LAUGHTER] Colton Ogden. This is an interesting
new direction that we’re going. DAVID MALAN: Yeah, it’s one in
which we clearly haven’t rehearsed. COLTON OGDEN: Yeah. DAVID MALAN: So, but what we thought
we’d do with the CS50 podcast is really focus on the week’s current events as they relate to technology, use this as an opportunity to talk about the implications of various technologies, and really explain things as they come up, but in a non-visual way. And so, perhaps, I
think the topics Colton and I’ll hit on here will focus on
things you, yourself, might have read in the news that maybe
didn’t register necessarily or maybe you didn’t really understand
how it pertains to technologies that you, yourself, use. COLTON OGDEN: Yeah, and I think
that ties as well to when we did CS50 Live previously, and this
was kind of the same idea. DAVID MALAN: Yeah, absolutely. Whereas CS50 Live, when we did it
on video, was much more visual– we prepared slides, we actually
looked at sample videos and such– here, we thought we’d really
try to focus on ideas. And it’ll be up to you to decide
if this works well or not well, but we come prepared with a look
at some of the past week’s news. And why don’t we get right into it? COLTON OGDEN: Yeah, absolutely. One of the things I
noticed, actually, is– I put together this list of
topics, but the one thing that I didn’t put in here that you
actually found and put in here, today, was something
about Facebook passwords. DAVID MALAN: Yeah, so a website named Krebs on Security– the author of it was contacted, apparently, by some employee, presumably a current employee of Facebook, who revealed to him that during some recent audit of their security processes, they discovered that for like seven years, since 2012, one or more processes inside of Facebook had been storing passwords– users’ passwords, like yours and mine potentially– in the clear, so to speak: clear text, not cipher text, which means unencrypted–
in some database or some file somewhere. COLTON OGDEN: Typically,
people will use some sort of hashing algorithm to store things
cryptographically and much more securely? DAVID MALAN: Indeed, even like ROT13,
like rotate every character 13 places, would have been arguably more secure. And there’s not a huge amount
of technical detail out there. If you go to,
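As an aside for listeners: the gap between plain text, a toy cipher like the ROT13 Malan mentions, and an actual salted one-way hash can be sketched in a few lines of Python. The password and salt here are invented, and a real system would prefer a deliberately slow hash like bcrypt or scrypt over plain SHA-256:

```python
import codecs
import hashlib
import hmac
import os

password = "correct horse battery staple"  # invented example password

# 1. In the clear -- what the audit reportedly found: anyone with access
#    to the database or log file sees the password itself.
stored_plaintext = password

# 2. ROT13 -- rotate each letter 13 places. Trivially reversible (it is
#    its own inverse), but arguably a notch above plain text.
stored_rot13 = codecs.encode(password, "rot_13")

# 3. A salted one-way hash -- the site can verify a login attempt
#    without ever being able to recover the original password.
salt = os.urandom(16)
stored_hash = hashlib.sha256(salt + password.encode()).hexdigest()

def check(attempt: str) -> bool:
    """Hash the attempt with the same salt and compare digests."""
    digest = hashlib.sha256(salt + attempt.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)
```

Only the third form lets a site verify logins while keeping a database leak from revealing passwords directly.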
you can actually dig up the blog post itself. And then Facebook actually
did respond, and I think there’s a link in Krebs on
Security to the Facebook announcement. But the Facebook announcement, which is on, to be honest, is pretty nondescript and really doesn’t– I mean, it’s kind of disingenuous. They seem to use this as an opportunity
to talk about best practices when it comes to passwords and all
of the various other mechanisms that they have in place to
help you secure your password. And yet, they really kind
of didn’t address the topic at hand, which is, well,
despite all of those mechanisms, you were storing our passwords in the clear– or at least those of millions of Facebook users, particularly on Facebook Lite, a lighter-weight version of the app that’s useful in locations where bandwidth is very expensive or slow.
COLTON OGDEN: So this strikes you
sort of as an opportunity for them to, what, hand wave over the issue
and sort of distract people? Is that sort of how it– this rubs you? DAVID MALAN: Yeah, maybe. I think they, you know,
acknowledged the issue, but then used this as an
opportunity to emphasize all of the things that are being done well. And that’s fine, but I think
the world is done a disservice when companies aren’t just
candid with their mea culpas and what they got wrong. I think there’s learning
opportunities and, as I read this, there’s really little for
me as a technical person or as an aspiring programmer
to really learn from, other than the high order bit
which is, encrypt your passwords. But how did this happen? What are the processes that failed? I mean if companies like
Facebook can’t get this right, how can little old me,
an aspiring programmer, get these kinds of details right? I wonder. COLTON OGDEN: So an article
more about how they failed and how they could address it, and
how other companies could address it, you think that would’ve
been more productive? DAVID MALAN: I think so. I mean, postmortems, as they’re called in many contexts, including in tech, are valuable, and I’ve always really admired
companies that when they do have some significant mistake or
human error, own up to it and explain in technical
terms exactly what went wrong. They can still have a
more layman’s explanation of the problem too, since most people might only take an interest at that level of detail. But for the technophiles
and for the students and the aspiring technophiles out
there, I think it’s just appreciated. And these are such
teachable moments and all that– but I would respect the people, the company, all the more if they really just explained what it is they got wrong, so that we can all learn from it and not repeat those mistakes. COLTON OGDEN: If a large company like
Facebook is doing something like this, how prevalent do you think this
practice is in the real world? DAVID MALAN: Yeah. Oh my God. I mean, probably
frighteningly common, and it’s just if you have fewer users
or fewer eyes on the company, you probably just notice
these things less frequently. But I do think things are changing. I mean with laws like GDPR in
the EU, the European Union, I think there’s increased pressure
on companies now, increased legal pressure, on them to disclose
when these kinds of things happen, to impose penalties when they do, and to therefore discourage this from even happening. And you know, I’m wondering why this
audit detected this in 2019, and not in 2012 or 2013 or 2014 and so forth. COLTON OGDEN: GDPR, did
that happen back in 2012? Oh no, that was– DAVID MALAN: No, this was recent. COLTON OGDEN: That one came into force– OK. DAVID MALAN: Only in recent months, actually, has this been rolled out.
COLTON OGDEN: Was this–
is this related at all to the proliferation, now, of cookie
messages that you see on websites? DAVID MALAN: That’s US-specific, where
I believe it’s now being enforced. Because that actually has been
around for quite some time in Europe. Anytime you took your laptop abroad, for instance, you would notice that almost every darn site asks you, hey, can we store cookies? And honestly, that’s a very annoying
and almost silly manifestation of it because the reality
is, as you know, I mean the web doesn’t work
without cookies, or at least dynamic applications don’t work. And anyone who’s taken CS50 or who’s done a bit of web programming, really, in any language, knows that the only way to maintain state in most HTTP-based applications is with cookies.
So, I mean, we’ve created
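The state-keeping Malan describes is worth seeing concretely. A sketch with Python’s standard library, simplifying what a web framework would do for you: the server mints a session ID, sends it in a Set-Cookie header, and the browser echoes it back on every later request:

```python
import secrets
from http.cookies import SimpleCookie

sessions = {}  # server-side store: session ID -> per-user data

# First response: the server mints a session and sets the cookie.
session_id = secrets.token_hex(16)
sessions[session_id] = {"cart": ["textbook"]}
response = SimpleCookie()
response["session"] = session_id
set_cookie_header = response.output(header="Set-Cookie:")

# A later request: the browser sends the cookie back, and the server
# parses it to figure out which user this is.
request = SimpleCookie()
request.load(f"session={session_id}")
user_data = sessions[request["session"].value]
```

Without that round-tripped identifier, every HTTP request would look like it came from a stranger.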
a culture where people just dismiss yet another message, and I don’t
think that’s a net positive either. COLTON OGDEN: I think I see a lot,
too, of the messages that say, by continuing to use this
site, you acknowledge that we have access to whatever
information, using cookies, and so on. So I almost think that they do
it already and sort of legally can get away with it by
having this message visible. DAVID MALAN: Yeah, I mean,
it’s like cigarette ads which, abroad, as well, there was– before the US, there was
much more of, I presume, law around having to have very
scary warnings on packages. And companies somewhat cleverly,
but somewhat tragically, kind of steered into that and really owned
that and put up the scariest of messages. And it– you almost become desensitized
to it because it’s just so silly and it’s so over the top,
you know, smoking kills. And then, here’s the price
tag and here’s the brand name. Like, you start to look past
those kinds of details too, so I’m not sure even that
is all that effective. But someone who’s looked
at this and studied it can perhaps attest quantitatively
just how effective it’s been. COLTON OGDEN: Yeah, indeed. Well, scary to know
that our passwords may have been reflected visibly
on somebody’s server, a big website like Facebook. Related to that, another of the topics
that I sort of dug into a little bit yesterday– or not
yesterday, a few days ago, was Gmail Confidential
Mode, a new feature that they’re starting to roll out. DAVID MALAN: Yeah. Yeah, I saw that, just in March,
one of the articles on Google’s blog discussed this. What, so do you understand what the– what they’re offering now as a service? COLTON OGDEN: I have to reread
back through the article. So from what I understood, though,
it was encrypting not P2P emails, but encrypting the emails sort
of towards a sort of proxy, towards a center point,
and then forwarding that encrypted email to the other
person on the receiving end. But I remember reading in the article
that P2P encryption wasn’t something that they were actually going
to start implementing just yet. DAVID MALAN: Yeah, and this is– I mean, this is kind of the illusion
of security or confidentiality. In fact, I was just
reading, after you sent me this link on the EFF’s website, the
Electronic Frontier Foundation who are really very progressively-minded,
security-conscious, privacy-conscious individuals as a group, they noted
how this really isn’t confidential. Google, of course, still has access
to the plain text of your email. They don’t claim to be
encrypting it Peer-to-Peer, so they’re not being disingenuous. But I think they, too, are
sort of creating and implying a property, confidentiality,
that isn’t really there. And what this does,
for those unfamiliar, is when you send an email in
Gmail, if you or your company enables this feature– I think it might still be in
beta mode or in trial mode. COLTON OGDEN: I don’t think it’s
officially fully deployed yet, but yeah, [INAUDIBLE]. DAVID MALAN: Yeah, so you can opt into
it if you have a corporate account, for instance. It gives you an additional, like,
Lock icon on the Compose window for an email, where you can say that
this message expires, sort of James Bond style, after some number of hours. You can add an SMS code to it
so the human who is receiving it has to type in a code that
they get on their cell phone. And so it also prevents users
from forwarding it, for instance, therefore accidentally or intentionally
sending it to someone else. But there’s the fundamental
issue because you start to condition people,
potentially, into thinking, oh, this is confidential. No one can see the message that
I’m sending or that I’ve received. And that’s just baloney, right? And you or I could take
out our cell phone, right now and not screenshot, but
photograph anything on our screen. You could certainly highlight and
Copy-Paste it into some other email. And so I think these kinds of features
are dangerous if users don’t really understand what’s going on. And honestly, this is
going to be a perfect topic in CS50 itself or CS50 for MBAs or
for JDs at Harvard’s graduate schools. Because if you really push on
this, what does confidential mean? Well, not really much. You’re just kind of– it’s more of a
social contract between two people. Like, OK, OK, I won’t forward this. It’s just raising the bar. It’s not stopping anything. COLTON OGDEN: Part of this kind
of reminds me, too, of the point you like to mention
in most of the courses that you teach in that security
doesn’t really mean much just by virtue of seeing something. Somebody sees a padlock icon
in their browser in, let’s say,, that
doesn’t necessarily mean that anything that they see is secure. DAVID MALAN: Yeah, well that too. I mean, there too, we humans
learned, years ago, well maybe we shouldn’t be putting
padlocks in places that have no technical meaning
for exactly that reason. People just assume it means something
that it doesn’t, so we seem doomed, as humans, to repeat these mistakes. And this isn’t to say that I
think this is a bad feature. Frankly, I wish that I could somehow
signal to recipients of emails I send, sometimes, please don’t
forward this to someone else because it’s not going to reflect well
or it’s going to sound overly harsh or whatever the email is. You sometimes don’t want
other people to see it, even if it’s not the end of
the world if they actually do. But short of writing in all caps, like, do not forward this email, at the very start of your message, most people might not realize that. So I think having a
software mechanism that says don’t, not forwardable, isn’t bad. But, you know, it should probably be
like, please don’t forward, and not imply that this is confidential
and no one else is going to see it. COLTON OGDEN: Do you think that
there is some sort of risk involved in making these emails
self-destructing, inasmuch as maybe it will bite people sort of
in the future when maybe they want to look back on records
that are important like this? DAVID MALAN: Could be. I mean, there too, I suspect there
are business motivations for this, for retention policies, where there
might be laws or policies in place where companies do or don’t
want to keep information around because it can come back to bite them. And so maybe it’s a good thing if emails
do expire after some amount of time, so long as that’s within
the letter of the law. But I presume it’s
motivated, in part, by that. So this is a software
technique that helps with that. And so, in that sense,
you know, confidential does have that kind of
meaning, but it’s not secure and I worry that you put a padlock
on it– that doesn’t necessarily mean to people what you think. I mean, so many people,
and kids especially, might think or once thought that
Snapchat messages are indeed ephemeral and they’ll disappear. But ah, I mean, they’re
still on the servers. COLTON OGDEN: Undoubtedly. DAVID MALAN: They can be on the servers. You can snap– screenshot them or
record them with another device. So I think we do humans a disservice
if we’re not really upfront as to what a feature means
and how it works, and I think we should label things
appropriately so as to not oversell them. COLTON OGDEN: And it sort of
takes unfortunate advantage of those who are not as technically
literate as well, allowing them to sort of– or
at least capitalizing on people taking for granted these
things that they assume to be true. DAVID MALAN: Yeah. I mean, we’ve been doing
this, honestly, as humans, for like, what, 20, 30 years with DOS. You might recall that when you format a
hard drive, which generally means to– kind of means to erase it and prepare
it to have something new installed on it, the prompt, back then, when you used format or fdisk or whatever it was, was: are you sure you want to proceed? This will erase the entire disk– something like that, and I think it actually was in all caps. But it was false, technically. Right? All it would do is rewrite
part of the headers on disk, but it would leave all of your
zeros and ones from previous files there in place. And there, too, we said it would delete
or erase information, but it doesn’t. And so, for years maybe
to this day, do people assume that when you delete
something from your Mac or PC or empty the Recycle Bin
or whatnot, that it’s gone? But anyone who’s taken CS50
knows that’s not the case. I mean, we have students recover data
in their forensics homework alone. COLTON OGDEN: You have a
background, certainly, in this too. You did this for a few years. DAVID MALAN: Yeah, or a
couple, a year or two. Yeah, yeah, in graduate school. COLTON OGDEN: If you were to advise
our listeners on the best way to sort of format their hard
drive and avoid this fallacy, what would be your suggestion? DAVID MALAN: Drop it in a volcano. [LAUGHTER] COLTON OGDEN: So then,
are you insinuating that there is no truly safe
way to clean a hard drive? DAVID MALAN: No, no. Well, in software, it’s risky. COLTON OGDEN: In software. DAVID MALAN: I think if you
really want peace of mind because you have personal documents,
financial documents, family documents, whatever it is that you want to destroy,
physical destruction is probably the most safe. And there are companies that allow
you to physically destroy hard drives. They drill holes in it or
they crush it or whatnot, or you can take out a hammer and
try to break through the device. But it’s difficult, as
we’ve seen in class when we’ve disassembled things, you and I,
for CS50’s Introduction to Technology class. It’s hard just to get
the damn screws open. So that’s the most robust
way, is physical destruction. You can wipe the disk in software. Frankly, it tends not
to be terribly easy. It’s easier with mechanical drives,
hard disk drives that spin around. But with SSDs, the Solid State Drives
that are purely electronic these days, it’s even harder. Because those things, in
a nutshell, are designed to only have certain parts of them
written to a finite number of times. And eventually, the hard drive,
after a certain number of writes or after a certain amount of time, will
stop using certain parts of the disk. And that does mean you have slightly less space available, potentially, but it ensures that your
data’s still intact. That means that even if you
try to overwrite that data, it’s never going to get written
to because the device isn’t going to write to it anymore. COLTON OGDEN: Oh, I see. It closes off certain sectors– DAVID MALAN: Exactly. COLTON OGDEN: –that might
have data written in. That’s interesting. I didn’t know that. DAVID MALAN: So you’re better off just
destroying that disk, at that point, too. So it’s wasteful
unfortunately, financially, but if you want true peace of mind, you
shouldn’t just wipe it with software. You shouldn’t hand it off to
someone and assume that Best Buy or whatever company is doing it for
you is going to do it properly as well. You should probably just
remove the device if you can, destroy it, and sell the
rest of the equipment. COLTON OGDEN: I think
this is reflected too, in Mr. Robot, where he
microwaves an SD card to destroy what’s on it– DAVID MALAN: Did he? I don’t know if I saw
that episode, then. COLTON OGDEN: This was, I
think, the second episode. DAVID MALAN: That’s probably
not the right way to do it. That’s probably just very dangerous. COLTON OGDEN: That’s probably very–
yeah, I think it exploded in the video, but yeah. DAVID MALAN: You don’t put metal
thing– for our CS50 listeners out there, don’t put metal
things in microwaves. COLTON OGDEN: Yeah,
generally not advisable. DAVID MALAN: No, I
think never advisable. [LAUGHTER] COLTON OGDEN: Yeah,
so off of the– well, I guess sort of related
to the topic of security, there was an article
recently published on Gizmodo about how the FCC admitted in
court that it can’t track who submits fake comments to its database. DAVID MALAN: Yeah, I was reading that. And as best I could tell, it sounded
like they had a web-based form to solicit feedback on, what was it,
net neutrality or some topic like that, and they claimed that
they couldn’t trace who it was because apparently there were
millions of bogus comments generated by script kiddies or just
adversaries who wrote programs to just submit comments again
and again and again and again. And as best I could infer, it sounds like they weren’t logging, maybe, who they were coming from, maybe its IP address. It sounded like maybe they didn’t
even have a CAPTCHA in place to sort of force a presumed human
to answer some challenge like a math problem or what is this
blurry text or click all of the icons that have crosswalks
in them or something like that. COLTON OGDEN: Right. DAVID MALAN: And so they just
don’t have much metadata, it seemed, about who the users were. So short of looking at the
text that was submitted alone, it sounds like they can’t
necessarily filter things out. It’s a little strange
to me because it sounded like they do have IP addresses, at
least in the article that I read, and the FCC doesn’t want to release
that for reasons of privacy. But you could certainly filter
out a good amount of the traffic, probably, if it all seems to
be coming from the same IP. I’m guessing many of the
adversaries weren’t as thoughtful as to use hundreds or
thousands of different IPs, so that’s a little curious too. COLTON OGDEN: Is it all
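The filtering Malan has in mind here is not sophisticated; a sketch in Python, with the IP addresses, comment text, and threshold all invented for illustration:

```python
from collections import Counter

# (source IP, comment text) pairs, as a comment API might log them.
submissions = [
    ("198.51.100.2", "I oppose this proposal"),
    ("203.0.113.7", "Support the repeal"),
] + [("203.0.113.7", "Support the repeal")] * 5000  # one bulk submitter

per_ip = Counter(ip for ip, _ in submissions)

# Flag any IP responsible for an implausible volume of comments.
THRESHOLD = 100
suspicious = {ip for ip, count in per_ip.items() if count > THRESHOLD}
```

Even this crude pass would isolate much of the bulk traffic the article describes, without releasing any addresses publicly.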
related to onion routing? And this is more of my sort of
lack of knowledge of Tor and onion routing, but is this sort of how
onion routing works in that you can spoof your IP from a million locations? DAVID MALAN: Not even spoof your IP. You just really send the data
through an anonymized network such that it appears to be– that it is coming from
someone else that’s not you. So yeah, that’s an option. I’ve not used that kind
of software in years or looked very closely at how it’s
advanced, but that’s the general idea. Like, you just get together with a
large enough group of other people who you presumably don’t know,
so n is large, so to speak, and all these computers are
running the same software. And even though you might originate a
message in email or form submission, that information gets routed through
n minus 1 other people, or some subset thereof, so that you’re kind
of covering your tracks. It’s like in the movies, right,
when they show a map of the world and like the bad guys’ data is
going from here to here to here and a red line is bouncing
all over the world. That’s pretty silly, but it’s
actually that kind of idea. You just don’t have
software that visualizes it. COLTON OGDEN: And that’s why it looks–
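The layering behind that idea can be sketched in Python. XOR with a per-relay key stands in for real public-key encryption, and the keys and message are invented; the point is just that the sender wraps one layer per relay and each relay peels exactly one:

```python
def xor(data: bytes, key: int) -> bytes:
    """Toy stand-in for encryption: XOR every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

relay_keys = [17, 42, 99]  # one key per relay, in forwarding order
message = b"hello, world"

# Sender: wrap layers in reverse, so the first relay's layer is outermost.
onion = message
for key in reversed(relay_keys):
    onion = xor(onion, key)

# Each relay peels its own layer and forwards what remains; only the
# last relay ever sees the plain message, and it doesn't know the sender.
for key in relay_keys:
    onion = xor(onion, key)
```

After the last relay peels its layer, the original message emerges, yet no single relay saw both the sender and the plain text.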
that’s why they call it onion routing, because it’s like layers of an
onion, kind of going all around? DAVID MALAN: Oh is it? I never thought about it. COLTON OGDEN: I thought that that
was why it was called onion routing. DAVID MALAN: Maybe. That sounds pretty compelling. So sure, yes. COLTON OGDEN: They
apparently, per the article, said that the API logs contain dozens of IP addresses that belong to groups that uploaded millions of comments combined. So to your point, it does sound
like that indeed is what happened. DAVID MALAN: Oh, so that’s
presumably how they know, minimally, that there were bogus comments? COLTON OGDEN: Yeah. DAVID MALAN: But then it’s hard to distinguish maybe some of the signal from the noise. COLTON OGDEN: Folks’
concerns were that people were sort of creating
these bogus comments that were propaganda, essentially malicious–
maliciously-oriented comments. DAVID MALAN: Yeah, well
this sounds pretty stupid then because, honestly, there are so many available APIs via
which you can at least raise the barrier to adversaries. Right, using CAPTCHAs so
that, theoretically, you can’t just write a program to answer
those kinds of challenge questions; a human actually has to do it. So you might get bogus submissions,
but hopefully not thousands or millions of them. COLTON OGDEN: Yeah. No, it sounded more like a technical– I might not want to– I don’t want to stretch
my words here– but it sounded like there was a little bit of
potential technical illiteracy involved at least in this. DAVID MALAN: Could be. COLTON OGDEN: Potentially. DAVID MALAN: It could be. COLTON OGDEN: I want to try to
sound as diplomatic as possible. DAVID MALAN: Good thing they’re making
all these decisions around technology. COLTON OGDEN: Ah, yeah, exactly. Right? And I have a picture– OK, I’m not going to go
in that direction, but– DAVID MALAN: I don’t think we can
show pictures on this podcast, though. COLTON OGDEN: Another
topic sort of related to this was– and John
Oliver sort of did a skit on this related to
robocalls– is, well, robocalls. For those that– do you
want to maybe explain what robocalls are for our audience? DAVID MALAN: Yeah. I mean, a robocall is like a
call from a robot, so to speak. Really, a piece of software that’s
pretending to dial the phone, but is doing it all
programmatically through software. And it’s usually because they
want to sell you something or it’s an advertisement or
it’s a survey or they want to trick you into giving your Social Security number or into thinking that you owe taxes. I mean, they can be used for any number
of things and, sometimes, good things. You might get a reminder from a robocall
from like an airline saying hey, your flight has been delayed an hour. That’s useful, and
you might invite that. But robocalls have a bad rap
because they’re often unsolicited and because I have not signed
up for someone to call me. And indeed, these have been
increasing in frequency for me too, on my cell phone in particular,
which theoretically is unlisted. And I added the Do Not Call thing, years ago, but that’s really just the honor system. You don’t have to honor
people who are on those lists. COLTON OGDEN: They just supposedly
have to check the list, right, but they can still call you afterwards? DAVID MALAN: Yeah, and I mean certainly
if the problem is with bad actors, then, by definition, those people
aren’t respecting these lists in the first place. So it’s the good people who you might
want to hear from who you’re not because they are honoring
the Do Not Call lists. COLTON OGDEN: I’ve noticed that
I’ve received a great many as well. Most of them from– DAVID MALAN: Oh, yeah,
sorry about those. COLTON OGDEN: Most of them from 949,
which is where I grew up in California, and that’s where the bulk of all
the messages are coming from. DAVID MALAN: Well, where they seem to be coming from. I’ve noticed this too. I get them from 617, which
is Boston’s area code, too. They’re doing that on purpose. I just read this, and it
makes perfect sense, now, in retrospect why I keep seeing
the same prefix in these numbers. Because they’re trying
to trick you and me into thinking that, oh, this
is from a neighbor or someone I know in my locality. No it’s just another obnoxious
technique, to be honest. COLTON OGDEN: I live
in Massachusetts now, so I know that it’s not a neighbor,
definitely, if they’re 949. DAVID MALAN: Yeah. COLTON OGDEN: Likely. DAVID MALAN: Well, the thing is,
I don’t know anyone’s number now. So if it’s not in my contacts, I know it’s probably a robocall. So no, it’s really awful
and, I mean, I think one of the primary reasons this
is becoming even more of a problem is that making calls is so darn cheap. I mean, you and I have
experimented with Twilio, which is a nice service that has, I
think, a free tier but a paid tier too where you can automate phone
calls, hopefully for good purposes. And I was just reading on their
website that they actually deliberately, though this certainly
is a business advantage for them too, charge minimally by the
minute, not by the second, because they want to charge even
potential adversaries at least 60 seconds for the call. Though, of course, this
means that if you and I are writing an app that just needs
a few seconds of airtime, we’re overpaying for it. But it’s a hard problem because
calls are just so cheap. And this is why spam has
so proliferated, right? Because it’s close to zero cents to
even send a bogus email these days. And so those, too, are dominating the internet. And thankfully, companies
like Google have been pretty good at filtering it out. You know, we don’t really have a
middleman filtering out our phone calls, and it’s unclear if you’d
want a middleman– software picking up the phone
figuring out if it’s legit and then connecting them to you. COLTON OGDEN: Right. DAVID MALAN: It feels a little
invasive and time-consuming, too. COLTON OGDEN: And it’s
kind of alarming just how easy it is for your average person,
your average beginning programmer to set up an automated robocall system. This was illustrated, and again,
back to the John Oliver segment, this was illustrated on there
where they had literally had– they were showing a clip of somebody
who wrote a little command line script or something like that. And even John Oliver made
light of it in this skit where he said that his tech person
took only 15 minutes to sort of bomb the FCC with phone calls. But I mean, the demonstration
showed writing a simple script, 20 phones just light up on the table. And this can be scaled and, you know,
however large you want to go with it. DAVID MALAN: No. And, fun fact, I actually did this
in CS50 once, a few years ago, and have not done it since because
this blew up massively on me. Long story short– and we have video
footage of this if you dig through– several years ago; most recently, it’s probably 2014, give or take. In one of the lectures, mid-semester,
we were talking about web programming and APIs and I wrote a script in
advance to send a message via text– well, technically, via email to text. It was sent through
what’s called an email-to-SMS gateway that would send a message
to every CS50 student in the room. And at the time, I
foolishly thought it would be cute to say something
like, where are you? Why aren’t you in class? Question mark. And the joke was supposed to
be, because if anyone were, you know, cutting class
that day and weren’t there they’d get this message
from CS50’s bot thinking, oh my God, they know I’m not there. When, really, everyone else
in the classroom was in on it because they saw me running the program
and they knew what was going to happen. And it was pretty cool in
that, all of the sudden, a whole bunch of people
in the room started getting text messages with
this, I thought, funny message. But I had a stupid bug in my
code, and essentially my loop sent one text message,
the first iteration; then two text messages,
the second iteration; then three text messages,
the third iteration. Whereby the previous recipients would
get another and another and another because I essentially kept appending
to an array or to a list of recipients instead of blowing away the
previous recipient list. COLTON OGDEN: It’s like
a factorial operation. DAVID MALAN: Well, an arithmetic series, technically. COLTON OGDEN: [INAUDIBLE]. DAVID MALAN: Or if you did it– I did, I think I did out the math. If I had not hit
Control-C pretty quickly to cancel or to interrupt the process,
I would have sent 20,000 text messages. And they were going out quickly. And I felt horrible because this was
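The bug is easy to reproduce in miniature: the recipient list is appended to on every iteration instead of being reset, so iteration i sends i messages, and the total is 1 + 2 + … + n = n(n + 1)/2. With roughly 200 students, that works out to about the 20,000 figure mentioned (the sketch below only counts sends, without texting anyone):

```python
def messages_sent(num_students: int) -> int:
    """Simulate the buggy loop: count texts without sending any."""
    recipients = []
    sent = 0
    for student in range(num_students):
        recipients.append(student)  # bug: the list is never cleared
        sent += len(recipients)     # one text per accumulated recipient
    return sent
```

Resetting recipients at the top of each iteration would have dropped the total from n(n + 1)/2 down to n.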
enough years ago where some people were still paying for text messaging plans. It wasn’t unlimited, which is pretty
common, at least in the US these days, to just have unlimited texts
or iMessage or whatever. So, you know, this could have been
costing students $0.10 to $0.25 or whatever. So we offered to
compensate anyone for this, and I did have to hand a $20 bill, I think, to one student whose phone I had overwhelmed. But, there too, phones were
old enough that they only had finite– well, they always had finite memory. They had terribly little
memory, and so when you get a whole bunch of text
messages, back in the day, it would push out older text
messages and I felt awful about that. COLTON OGDEN: Oh. DAVID MALAN: Kind of
overwhelming people’s memory. So anyhow, this is only
to say that even hopefully good people with good intentions can use
robocalls or robotexting accidentally for ill. And if you’re trying to
do that deliberately, maliciously, it’s just so darn easy. COLTON OGDEN: So solutions to this then? DAVID MALAN: Don’t let me
in front of a keyboard. [LAUGHTER] COLTON OGDEN: Do we– so there is a
little bit of reading I was doing, and it might have been
in this same article, but cryptographically
signing phone calls, is this something that
you think is possible? DAVID MALAN: Yeah. I mean, I don’t know terribly
much about the phone industry other than it’s pretty
backwards or dated in terms of how it’s all implemented. I mean, I’m sure this is solvable. But the catch is how do you roll it out
when you have old-school copper phone lines, when you have all of us using
cell phones on different carriers? It just feels like a very
hard coordination problem. And honestly, now that data plans are
so omnipresent and we decreasingly need to use voice, per se– you can use Voice over IP, so to speak– you know, I wouldn’t be surprised
if we don’t fix the phone industry, but we instead replace it with some
equivalent of WhatsApp or Facebook Messenger or Skype or Signal
or any number of tools that communicate voice,
but over software. And at that point, then
yes, you can authenticate. COLTON OGDEN: OK, that makes sense. And especially if this keeps scaling,
I feel like this is an eventuality. DAVID MALAN: I imagine, yeah. I mean, even now, right, like I don’t
get calls via any of those apps– well, some of them, the
Facebook ones I do– from people who aren't
in my contacts. Sometimes, it just goes to
your other folder or whatnot. But I’m pretty sure
you can prevent calls from people who weren’t on your
whitelist on those apps like Signal and WhatsApp that do use
end-to-end encryption. COLTON OGDEN: Sure, yeah. That makes sense. That makes sense. DAVID MALAN: So we shall see. COLTON OGDEN: There is a– so away from the, I
guess, the security, which has been a major theme
of the podcast today, towards something a little bit different
actually– and this is pretty cool and I’m, particularly for
me because I’m into games– but Google actually announced
a brand-new streaming service that people are really talking about. DAVID MALAN: Yeah, that’s
really interesting. You probably know more about this world
than I do, since I am a fan of the NES, the original. [LAUGHTER] COLTON OGDEN: Well, it’s
called Stadia, and I’ve done a little bit of
reading on it, not terribly much, because there's actually not
that much about it, right now. DAVID MALAN: OK. COLTON OGDEN: Because it just was
announced maybe two or three days ago and, actually, Karim, one of our
team, kindly showed it to me because I wasn't aware. This was going on at a live event. DAVID MALAN: OK. COLTON OGDEN: But it's essentially
an idea that’s been done before. The companies have done this
sort of thing where we process all the games on our servers and then stream the video
signal to you while you play, and there's this back and forth. My initial sort of qualm about
this is that we’re fundamentally dealing with streaming latency. DAVID MALAN: Yeah, of course. COLTON OGDEN: And it’s– I find it highly unlikely that we can–
especially for geographically distant locations amongst servers
and amongst consumers– that we can deal with less
than 13 milliseconds of latency in between a given frame and
the input on someone’s machine. DAVID MALAN: Maybe right now,
but this seems inevitable. So I kind of give Google credit for
being a little bleeding edge here. Like, this probably won’t
work well for many people. But it feels inevitable, right? Like eventually, we’ll have
so much bandwidth and so low latency that these kinds of things seem
inevitable to me, these applications. So I’m kind of comfortable with
it being a bit bleeding edge, especially if it maybe has sort of
lower quality graphics, more Nintendo style than Xbox style which– or at
least with the Wii, the original Wii, was like a design decision. I think it could kind of work, and I’m
very curious to see how well it works. But, yeah, I mean, even
latency, we for CS50’s IDE and for the sandbox
tool and the lab tool that to support X-based
applications, which is the windowing system for
Linux, the graphical system, it doesn’t work very well for animation. I mean, even you, I think, implemented
Breakout for us a while ago. COLTON OGDEN: A while back. DAVID MALAN: And tried it out
and, eh, you know, it’s OK, but it’s not compelling. But I’m sure there are games
that would be compelling. COLTON OGDEN: Yeah. I know that in their examples, they
were doing things like Assassin’s Creed Odyssey, you know, very recent games
that are very high graphic quality. I mean, I would like to– I would definitely like to
see it at work, if possible. DAVID MALAN: Yeah. No, I think that would be pretty cool. One less thing to buy, too, and it
hopefully lowers the barrier to entry to people. You don’t need the hardware. You don’t need to
connect something else. You don’t need to draw the power for it. I mean, there’s some
upsides here, I think. COLTON OGDEN: I think especially
if they are doing this at scale– and Google already does
this, surely– but, you know, they have a
CDN that's very widespread– and maybe it's a
US-centric thing at first, and then they can scale it
out to other countries. Maybe the gap between any
given node on their network and a given consumer is small enough
such that the latency is minimal. DAVID MALAN: Yeah, hopefully. COLTON OGDEN: As long as it's
less than 13 milliseconds, though. That's one
over 60– typically, games run at
60 frames per second– so that's the amount of time it
has to process input and feel like a native game. DAVID MALAN: Well, to
be honest, I mean this is similar to their
vision for Chromebooks which, if you're unfamiliar, are
relatively low-cost laptops that are kind of locked down. They pretty much give you
a browser, and that’s it, the presumption being that you can
use things like Gmail and Google Docs and Google Calendar, even partly
offline, if you’re on an airplane, so long as you pre-open them in advance
and sort of cache some of the code. I mean, that works well so
long as you have good internet. But we’ve chatted with some of our
high school students and teachers whose schools use Chromebooks
and it’s not great when the students need to or
want to take the laptops home. Maybe they don’t have or can’t afford
their own internet access at home, so there’s certainly some downsides. But I don’t know. I feel like within enough
years, we’ll be at the point where internet access of some sort
is more of a commodity, like electricity in the wall, and so long as you have
that kind of flowing into the house, that it’ll be even more
omnipresent than it is now. COLTON OGDEN: Sure. Yeah that makes total sense. I would definitely
like to see it happen. I hope it does. DAVID MALAN: Yeah, so. COLTON OGDEN: Well, I
think that’s all the topics that we’ve sort of had lined up. We covered a nice sort
of breadth of them. This was great. I like this format. DAVID MALAN: Yeah. No, hopefully you’re
still listening because I feel like we should offer a
couple bits of advice here. I mean, one, on the
Facebook password front, I mean, even I did change my password. I don’t know if mine was among
the millions that were apparently exposed in the clear, and it’s
not clear that any humans noticed or used the password in any way,
but changing your password’s not a bad idea. And as you may recall from CS50,
itself, if you’ve taken the class, you should probably be using a
password manager anyway and not just picking something that’s
pretty easy for you to remember. Better to let software do it instead. And on the robocall front, I mean,
there's a couple of defenses here. I mean, even on my phone, I block
numbers once I realized, wait a minute, I don’t want you calling me. But you can also use
things like Google Voice, right, where they have a feature which
seems a little socially obnoxious where Google will pick up the
phone for you and they will ask the human to say who they are. Then you get, on your phone,
a little preview of who it is, so it’s like your own
personal assistant. COLTON OGDEN: That’s
kind of interesting. I actually didn’t
realize that was a thing. DAVID MALAN: It's an interesting
buffer, but it’s kind of obnoxious. COLTON OGDEN: Yeah. DAVID MALAN: Right, to
have that intermediate. COLTON OGDEN: You could have a
whitelist, surely though, that– DAVID MALAN: For sure. COLTON OGDEN: Yeah. DAVID MALAN: No, so
for unrecognized calls, but people tried rolling this
out for email, though, years ago, and I remember even
being put off by it then. If I email you for the
first time, we’ve never met, you could have an automated
service bounce back and say, oh, before Colton will reply
to this, you need to confirm who you are and click a link, or
something like that. And at least for me
at the time– maybe I was being a little, you know, a little
presumptuous– but it just felt like, ugh, this is, why is the burden being
put on me to solve this problem? COLTON OGDEN: Also, a little
high and mighty, potentially. DAVID MALAN: Yeah. But, I mean, it’s an
interesting software solution, and that’s what Google
Voice and there’s probably other services that do the same thing. So you can look into things like that. And as for like Stadia, I’ll
be curious to try this out when it’s more available. COLTON OGDEN: Yeah, me too. Me too. DAVID MALAN: Yeah. You know, I think it’s worth noting that
podcasting is how CS50 itself ended up online, way back when. Long story short, before
CS50, as Colton knows, I taught a class at Harvard’s Extension
School, the Continuing Ed program, called Computer Science
E-1, Understanding Computers and the Internet. And we, in 2005, I
believe, started podcasting that course, which initially
meant just distributing MP3s, which are audio files, of the lectures. And then, I think one year later,
when the video iPod, of all things, came out, we started distributing videos
in QuickTime format and Flash format, probably. COLTON OGDEN: Definitely MOVs. DAVID MALAN: Yeah, of
the course’s lectures. And it was really for the
convenience of our own students who might be commuting on a train
or maybe they’re on a treadmill, and it was just kind of trying
to make it easier for people to access the course’s content. And, long story short, a
whole bunch of other people who weren’t in the class online took an
interest, found the material valuable. And certainly, these days,
there’s such a proliferation of educational content online. But it was because
that course we started podcasting that when I
took over CS50 in 2007, it just felt natural, at that
point, to make the course’s videos available online as well. And even though we’ve kind of come full
circle now and taken away the video and replaced it just with
audio, I think it really allows us to focus on the
conversation and the ideas without really any distractions of
visuals or need to rely on video. So hopefully, this
opens up possibilities for folks to listen in, as
opposed to having to pay rapt attention to a screen. COLTON OGDEN: And what I like is this
is a more current events focused talk, too. Yeah, we have so much
other content, it’s nice to sort of have a discussion on
the things that are relevant in the tech world, or otherwise. You know, it fits this format very well. DAVID MALAN: Yeah, absolutely. Well, so this was Episode
Zero of CS50’s podcast. We hope you’ll join us soon for
Episode 1, our second podcast. Thanks so much to CS50's own Colton
Ogden, whose idea this has been, and thank you for spearheading it. COLTON OGDEN: And thanks, David,
for sort of leading the way here. DAVID MALAN: Yeah, absolutely. Talk to you all soon. COLTON OGDEN: Bye-bye.
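A postscript on the runaway-texting bug from the episode: the failure mode David describes– appending each new recipient to a list that never gets cleared– is easy to sketch. This is a hypothetical reconstruction in Python, not CS50's actual script; the function names are invented, and `send_text` just records messages instead of calling a real SMS gateway.

```python
# Hypothetical reconstruction of the accidental mass-texting bug:
# each loop iteration appended the next student to the recipient
# list without clearing it, so everyone so far got the message again.

outbox = []  # stand-in for a real SMS gateway: records every send


def send_text(number, message):
    outbox.append((number, message))


def notify_students_buggy(numbers, message):
    recipients = []
    for number in numbers:
        recipients.append(number)   # bug: the list is never reset
        for r in recipients:        # so every prior recipient is re-texted
            send_text(r, message)


def notify_students_fixed(numbers, message):
    for number in numbers:          # fix: exactly one text per student
        send_text(number, message)
```

With 200 students, the buggy loop sends 1 + 2 + … + 200 = 20,100 texts, in the ballpark of the 20,000 mentioned in the episode. (Strictly, that growth is an arithmetic series with a quadratic sum, n(n+1)/2, rather than a geometric one, but the effect is the same: far too many texts, very fast.)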
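And one note on the latency arithmetic from the Stadia discussion: at 60 frames per second, the per-frame budget is 1/60 of a second, which works out to about 16.7 milliseconds rather than 13. A minimal sketch of that arithmetic, with a purely illustrative round-trip time:

```python
# Per-frame time budget for a game rendered at a given frame rate.
def frame_budget_ms(fps):
    return 1000.0 / fps


budget = frame_budget_ms(60)  # 1000/60, about 16.67 ms per frame at 60 fps
rtt = 30.0                    # illustrative network round-trip time, in ms
# If the round trip alone exceeds the frame budget, the input-to-display
# delay necessarily spans multiple frames and stops feeling native.
frames_of_lag = rtt / budget  # 1.8 frames of unavoidable lag in this example
```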

28 thoughts on “Robocalls, Facebook Passwords – CS50 Podcast, Ep. 0”

  1. Dude, whether it's CS50 lectures on edX, CS50x Puzzle Day, CS50 Live, or the CS50 podcast, just keep it up– you are bringing very high-quality computer science material, and for free!!! So I totally support you with every event (I personally loved all three so far, and I'm looking forward to this).

  2. 1. Telegram's self-destruct texts in secret chats with a timer. They can't be screenshotted.

    2. The delete-message-on-both-sides feature that was introduced in WhatsApp recently has also been available in Telegram for a long time.

    How trustworthy are the above things? Is there a way to check that the text is deleted (or not stored) on the cloud/server?!

    And just last month, Facebook admitted Instagram passwords were stored as plain text as well, according to last week's news!!

    I don't know how true this is, but Tor users' original IP locations could be tracked down (the FBI/NSA/CIA– one of them claimed that). Thanks for mentioning Tor in the podcast.

    This is so great. I'll be looking forward to the podcast [1]

  3. I remember old DOS times… we had a utility from Norton– 'wipe', I think, was the name of the command– that would write 0s to the whole disk.
