The Amazon Echo Dot has a script

The script’s existence is not proof that it is used, but the speculation around it is a fun exercise. The only fact I can give about the /bin/ script is that it exists. (For now.)

I can say something different about another script on the system. I audited against it with nmap and enabled features on the Dot (such as using Spotify, which opens TCP 4070) to test the script’s execution and logic. The ability to audit the script and observe behavior is crucial, and the data supports that the script is used. (More would be better!) The images below are part of that audit: TCP 4070 open after enabling Spotify, followed by a quick banner grab.

Unfortunately I’m unable to do the same level of observation with the script. I can’t knowingly trigger it, I don’t have a way to image an Amazon Echo Dot, and I don’t have a way to remotely connect and monitor its activity. The script appears to create new memdump logs in the /data/system/dropbox directory. I would love to know the fate of these logs and anything else in that directory.

If you want a copy of the script you can download the system at Amazon Echo Update 567200820 (And where to download it!). Discovery of this script and other fun within the system happened late last year/early 2017. It’s been fun. 🙂

It’s worth noting that recently ArsTechnica ran the story of Amazon refusing to hand over data on whether Alexa overheard a murder, which puts a good perspective on information one could get (possibly) from Amazon about an Echo Dot user if they were motivated to do so. It’s a continuation of the involvement of an Echo in a murder case from 2016.

I wish I had more time to work on this system. Unfortunately taking 19 credits this semester has proven to be the challenge I was expecting. It’s something I still give attention but not at the level of intensity I would like. Hopefully this summer I can focus on it quite a bit more.


Amazon Echo Update 567200820 (And where to download it!)

I received an email from Ronald Brakeboer about an update to the Amazon Echo Dot system. He noticed that his unit updated to 567200820. I wasn’t tracking this and unfortunately didn’t have a copy of the update. It was recent so I decided to pick up another unit and hope that it still needed the update. I could then do what I did previously to download the update for analysis. (I could also unplug the unit and keep it in storage to capture future updates as well.)

Sure enough, the plan worked. I’ve posted a screenshot of the capture along with the download URL and some checksums. I hope it helps!


Download & Checksums

This URL once again works with wget. You don’t have to spoof user-agent strings or anything of that nature. 🙂

  • SHA1(update-kindle-full_biscuit- 824b94a9664cede9eb2f49ab312fcf66857405ca
  • SHA512(update-kindle-full_biscuit- a771c05054d33b3e53df4c2a63bdd9a9eda7fbadc11217cb8013bbfa712513f239228f093db72960f5577ed983949dcbf65188850052aafd9776c56bccca6d0a
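If you want to confirm your copy against a published hash, a short function does the job. This is a sketch; the function name is mine, and the filename and hash you pass in would be the real ones listed above.

```shell
#!/bin/sh
# verify_sha512 FILE EXPECTED_HASH
# Hashes FILE with SHA-512 and prints OK on a match, MISMATCH otherwise.
verify_sha512() {
  actual=$(sha512sum "$1" | awk '{ print $1 }')
  if [ "$actual" = "$2" ]; then
    echo "OK"
  else
    echo "MISMATCH"
  fi
}
```

Then something like `verify_sha512 update.bin a771c0…` (with the full SHA-512 above) settles it in one line.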

Additional Reading

If you haven’t yet read my Amazon Echo Dot System Image post then check it out. It goes into greater detail as to what I did. Always feel free to email me of course. Thanks!


Amazon Echo Dot System Image

My friend sent me a second generation Amazon Echo Dot as a holiday gift. It sounded like a good little opportunity to get the ball rolling on some modeling for RiotPSA. Plus I really wanted one!

I began by capturing Dot network traffic just to see what’s going on. I set up a port mirror on a little 5 port switch to capture all of the Dot traffic. I ended up using tshark for ring buffers. The goal was to capture the initial setup of the Dot and then about an hour of no activity. I would keep the Dot connected but would not say the Dot’s trigger word. (The default is Alexa.) After only a few minutes I had captured over 250MB! Something fun was going on.

My suspicion was that the Dot downloaded and applied a system update. This turned out to be true.

Using Wireshark

After the hour of capturing I used Wireshark to do a quick analysis of the data. I gravitate to sorting and filtering. I sorted conversations by bytes in Statistics > Conversations. The majority of IPv4 data (about 270MB) was between the Dot and

I filtered by right-clicking the conversation and selecting Apply as Filter > Selected > A <-> B. In the Wireshark packet list pane I checked out the traffic and decided to filter on the Stream Index of 29.

A nice, clean conversation. 🙂 There was a successful three-way handshake between the Dot and the remote host ( followed by an HTTP GET request for a file named update-kindle-full_biscuit- (I found out that 564196920 is the software version number from I copied the full request URI from frame 3546 ( and used wget to download the file.

It worked! I didn’t have to specify a user agent, spoof any information, etc. (And now you have a way to download the system image for analysis as well!)

I wanted to pull the same file out of the pcapng file. I exported the stream using right-click > Follow > TCP stream > Show data as RAW (drop-down list) > Save as… and saved it as ~/tcpstream29.raw. This export should include the HTTP GET request. xxd confirms its presence.

At this point we just want the file. The offset we’re looking for is easy to pick out due to a pretty recognizable file signature (0x504b0304) shortly after the ASCII keep-alive. A carver would be handy but the PK allows me to eyeball it.
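For streams where eyeballing isn’t practical, the carve can be scripted: find the byte offset of the first ZIP local-file header (PK\x03\x04) and keep everything from there on. A minimal sketch; the function name and filenames are my own, and it only handles the first match.

```shell
#!/bin/sh
# carve_zip RAW_FILE OUT_FILE
# Strips everything before the first ZIP local-file header (0x504b0304).
carve_zip() {
  sig=$(printf 'PK\003\004')
  # grep -abo prints the byte offset of each match; keep the first one.
  off=$(grep -abo "$sig" "$1" | head -n 1 | cut -d: -f1)
  # tail -c +N is 1-based, so start at off+1 to keep the signature itself.
  tail -c +"$((off + 1))" "$1" > "$2"
}
```

Something like `carve_zip ~/tcpstream29.raw carved.zip` would replace the hex-editor step below.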

I used a hex editor (Bless) to dump everything before offset 0x243.

It’s dead simple. Highlight what we want to remove and hit delete. I saved it as ~/tcpstream29_edited.raw. This file should be identical to the one I downloaded earlier with wget, which I verified by hashing each and comparing them. In case it helps the hashes are below:

  • SHA1(update-kindle-full_biscuit- e897fef9384220cb60bd6f385c328f57408cd5f5
  • SHA512(update-kindle-full_biscuit- cc92c85e08ce412dfbe14562e8df76cdd600da60d3f9245decabb9d65e92b473d07db11559a8b2ffe56e525d0050245dfd0d2c1d0dd23e47d14dee9dd911b01a

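The comparison itself is easy to script as well – cmp does the byte-for-byte check and saves squinting at hashes. (A sketch; the function name is mine and the filenames are placeholders.)

```shell
#!/bin/sh
# same_file A B -> prints "identical" when the files match byte-for-byte.
same_file() {
  if cmp -s "$1" "$2"; then
    echo "identical"
  else
    echo "differ"
  fi
}
```

For example, `same_file update.bin ~/tcpstream29_edited.raw` should print "identical" if the carve was clean.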

I loaded the file into X-Ways so I could explore, filter, comment, etc. This quickly led to enough information that warrants its own post. Because of that I won’t go over everything here but will instead save it for a separate post. It is worth noting some things that stuck out from the get-go:

  • The Dot runs Android or at least a modified version of it.
  • There’s a bash script for iptables firewall configuration. The script first flushes the tables and then puts a default deny in place. I will look at the rules and scan against them to verify that the script is run once I enter a more active phase of information gathering.
  • The Dot also implements bluetooth blacklisting to specifically prevent automobiles from automatically pairing with it.
  • The system update comes with two firmware packages for integrated components. I’ll run these through strings, binwalk, and bulk_extractor to see if anything fun comes up.

Again – I’ll go over these items (and more) in follow-up posts.

Thoughts of an Echo Dot compromise

The idea of fully compromising an Amazon Echo Dot crossed my mind. Here are a few thoughts I have.

Alter the system update to suit your goal

Altering the firewall bash script and including binaries in a repacked copy of the update is possible. If the Echo Dot does not require that system images be digitally signed then the system update has a chance at being loaded/run on an Echo Dot if presented correctly. I don’t know if the system packages are digitally signed or if signing is required. Absence of evidence is not evidence. Right now I just do not know because I haven’t looked.

Trick the Echo Dot into downloading and running a bad update

I believe I traced down the TCP conversation in my capture file that alerted the Echo Dot that an update was available and where to find it. I started at the system update download itself and worked backwards using the IP addresses, DNS queries/answers, etc. Unfortunately (or fortunately, really) the TCP stream of interest contains an encrypted conversation. I cannot see inside it to verify.

Effort and insight would be required to discover how the Echo Dot is specifically told an update exists. More would be needed still to manufacture a way to trick an Echo Dot into downloading and running a system image not created and made available by Amazon. Later on I’ll be using The OWASP Zed Attack Proxy (ZAP) and the Burp Suite to work on this.

A vulnerability in a listening process

The Echo Dot has a firewall with a deliberate set of rules. Bugs are organic to the development process and it’s possible that a permitted, listening process has a vulnerability. Keep in mind that the thoughts above are anything but unique to the Amazon Echo Dot. This is basic stuff.

Why so much has not been done

I’d love to explore the system update more but the spring semester of college just started and my hands are full. Being an older, non-traditional student means that I’m taking classes a little more seriously and likely putting unnecessary pressure on myself. I’m taking 19 credits and need to focus on starting the semester on the right foot!

Hopefully in the coming weeks I’ll have more time and resources to look more into this system and network traffic. Until then I realized that there wasn’t any reason to not share the system update, specifically the download link. I couldn’t find it on Google so I figure it just hasn’t been posted yet.

I’ll be sure to update the blog when I explore the system update and network traffic some more. Thanks!


A Sunday with an HDHomeRun CONNECT

Plex is a great piece of software that makes my life better. They recently announced a DVR feature on their beta channel. It requires hardware – in my case a tuner that takes over-the-air signals from an HDTV antenna and makes them available on the local network using DLNA. I settled on the SiliconDust HDHomeRun CONNECT unit that Plex officially recommends.

Yet another cheap, unitasking computer on my network. Oh joy.

I unboxed, connected, and configured the unit on Sunday to watch the evening NFL game on NBC. I had some extra time and decided to learn a little more about the unit.

Is it sending/receiving data from outside of my local network?

I have a portable, managed switch I have set up for basic port mirroring/traffic capturing. I used that setup and captured some traffic with tcpdump. I left to get a sandwich and stopped the capture when I returned. I filtered for data sent to or received from an address not on my local subnet.

root@kali:~# tcpdump -nnq -s 0 -r ~/hdhr01.pcap src net ! or dst net ! | less

A few pages of traffic showed up with multicasting and UPnP addresses that weren’t filtered out. I eventually ran into the following conversation:

It’s a TCP conversation between the unit and an outside service over port 80. It’s brief and presumably HTTP. Time to check it out.

Why is it doing this?

Using tcpdump for collection and targeted filtering on packet captures is ideal and effective. We did that and now have an address and a conversation to focus on further. I can move forward with tcpdump but prefer to use the Wireshark dissectors. The visual layout and features make exploratory searches on targeted data a strength of Wireshark.

I used the Analyze > Follow > TCP Stream option on the conversation. It’s brief and easy to look at.

The unit scanned for over the air (OTA) channels that it could pick up and checked with a server over HTTP for a list of corresponding station logos. It also included other information such as device ID, current firmware version, and local IP address.

If I have some time in the future I can use Scapy to send similar information but with an older firmware version reported, to see if a response is sent to indicate an update.

Is the external address embedded in the firmware?

The device is communicating with and it must get that address from somewhere. If it’s not embedded in the firmware then we would expect a DNS query from the unit. We’d also expect a successful DNS answer in response to the query because we know a TCP connection took place.

root@kali:~# tcpdump -s 0 -l -n -r ~/hdhr01.pcap port 53

Boom. We have both the DNS query and the DNS response that we were looking for. The unit queried for and was sent in response.

At this point I’m glad that I collected the packets the way I did. Filtering while you collect is ideal but if I had filtered out all local network packets it would have resulted in the omission of this DNS query and answer. Keeping the traffic relatively controlled by using a mirrored port was adequate.

Why .net and not .com?

I noticed that the DNS query is for and not .com. The URL for all of the product and marketing information about has been from the .com domain. This difference is probably a business decision and nothing more. If we were suspicious and thought the URL was being used maliciously we could do some sleuthing. A starting point would be to find out if both domains are owned by the same entity. A whois lookup may help us with that.

The records match in a way which supports that they are owned by the same organization. The Internet Corporation for Assigned Names and Numbers (ICANN) requires that whois information is accurate and up to date. My government also requires that I leave the tags on my mattress. I think you see where this is going. I’ve never read of whois record authenticity being enforced and I’m certain that people lie all the time. So what good are the records then?

If we choose to trust the domain and whois record then we can call the technical contact number provided and ask them if they also own or trust the .net TLD. This is not a complete or perfect solution but if this particular thing bothered you it’s one of many things you could do.

About that firmware…

If the tcpdump filter had not yielded the DNS query and answer I would have downloaded the firmware of the unit from the manufacturer’s website and run strings, bulk_extractor, binwalk, etc. against it. This would be an effort to show that the address was embedded.

Thinking about how things did turn out though – Should I expect to see pop up since it’s the hostname used to check for firmware updates? I would think so but I just can’t support the claim. Yet.

I say that because I walked through the OWASP IoT Firmware Analysis guide with the unit firmware and didn’t have any success. I did test the same guide with a DD-WRT image just to make sure that I wasn’t doing something wrong. It worked. I’ll have to spend some time to figure out what the hold-up is with the SiliconDust firmware. Perhaps I am missing something obvious.

This part quickly became the most fascinating to me. It would be fun to be able to yank that hostname string out of the firmware and have better knowledge to explore future blobs of data where I think a string may be hiding. I’m definitely going to continue with this part.

End of the Day

This was my early Sunday afternoon. It’s nothing hardcore but was still a fun little thing. I did eventually get around to using the hardware to watch some football.


Your MacPractice MD Server is Phoning a Friend

I was troubleshooting a MacPractice MD Server the other day for a client. I noticed some strange behavior and decided to look at some network statistics. I found an established connection to an address that resolved to an Updox server. Updox is integrated into MacPractice MD as a paid extra of sorts. You can read more about Updox and their relationship with MacPractice on the Updox website. The fact that the software is phoning a vendor’s business partner instead of the vendor directly is why I have titled this entry Your MacPractice MD Server is Phoning a Friend instead of Your MacPractice MD Server is Phoning Home.

The real issue with these connections is that the client I was working for did not pay for that extra feature. They had no contract with Updox and had no business purpose to communicate with them. They had previously expressed interest in using Updox with MacPractice MD for electronic FAX but did not pursue it after checking out the cost. No trial was ever started. Why was this MacPractice MD Server connecting to Updox servers? I met with my contact and explained what I found. I was curious and wanted to find out more. Ultimately this was not the cause or a symptom of the problem which I had been contacted to solve. I didn’t seek written permission to capture and analyze network traffic (I didn’t want to open a can of worms over my own curiosity.) but I did ask for permission to access the Cisco Meraki hardware and logs. The client and network administrator were more than willing to let me look further into this if I wanted to. Hell yes I did.

The connections from the MacPractice MD Server to Updox happened regularly and were over port 443 to That IP address resolved to Although I did not capture and dissect any network traffic I was able to verify that the IP address was listening for https on port 443 by establishing a connection with my own Mac. It was expected but worth verifying. In a 24 hour period the Meraki Security Appliance logged 1440 flows between the MacPractice MD Server and this IP address. (Note: A flow is defined by the firewall as one connection socket.) The number 1440 reeks of automation – 60 minutes × 24 hours = 1440, or exactly one connection per minute. It also seems excessive. Over the 24 hour period a total of 5.5MB was transferred between these two machines and about 2.5MB of that was egress traffic. Not a whole lot of action. With 100% certainty I can say that the MacPractice MD Server makes these connections. Any assumptions beyond that are just that: assumptions.

I would love to set up a test MacPractice MD Server with ZAP or Burp Proxy to have a better idea of what’s going on inside of those connections. I’m unable to test any hypothesis I may have about this until I’m essentially gifted a MacPractice MD Server license. Ideally I would be able to research this issue to either confirm my hunch that this traffic is benign or to be gifted with a juicy surprise. I’ve ragged on MacPractice before for a handful of reasons. Nevertheless it would be fun to find out what exactly is going on and potentially fix an issue. Even if it’s just for tidiness.


OS X’s tcpdump

I was messing around with non-recursive DNS queries today and noticed that the version of tcpdump that comes with OS X is special. (Why wouldn’t it be?) It’s compiled with a -P flag which allows it to save pcap-ng files as opposed to pcap. In the man page the following is said about the flag:

-P     Use the pcap-ng file format when saving files. Apple modification.

Pretty nifty!


OS X Sockets and Their Processes

How do we list sockets with the process names and PIDs that occupy them in OS X?

Imagine that our goal is to get a list of listening TCP sockets on OS X and the process names/PIDs that are using them. How would we go about displaying this data? We can start by asking ourselves how we would do it in Linux.

Using ss

In Debian GNU/Linux we can use the ss command with the -n (do not resolve service names) -l (display listening sockets) -p (display process using socket) and -t (display only TCP sockets) flags.

root@debian:~# ss -nlpt

This gives us the output that we are looking for. Once we move to OS X we discover that the ss command is not available. We always want to live off the land and not install anything special to gather our data. Using ss is not an option and we move on to netstat.

Using netstat

Although netstat and ss are different programs the flags in this next example have the same meaning. (As we’ll see this is simply not true for the netstat build included in OS X.) Use the following command/flag combination in Linux.

root@debian:~# netstat -nlpt

Like ss, netstat on Linux gives us the output we are looking for. Even better is that OS X comes with netstat. Let’s try that same command/flag combination in OS X.

osx:~ root# netstat -nlpt
netstat: t: unknown or uninstrumented protocol

The OS X build of netstat has flags that are different than what we expect on Linux. Some flags have different functions while some functions simply do not exist. Notably here the -p flag does not display the process name and PID in OS X. Instead it is used to specify a protocol to filter. The error above is the result of us specifying the protocol t, which does not exist. (Rearranging our flags will give us a different error but the cause will continue to be our -p flag not being provided a protocol to filter.) The real bad news comes when we read the OS X netstat man page and find out that there is no flag to display the process information that we are looking for.

This is an obstacle for us but not one that we can’t overcome. We can move on to lsof.

Using lsof

We can use lsof to do a lot of things, one of which is to show socket information. In fact we can use the right flags to display the exact information we are looking for.

Here’s the command and flags that I tend to use in OS X. Run it in both Linux and OS X to verify that the application behaves identically on each system.

root@debian:~# lsof -nPl -iTCP -sTCP:LISTEN
osx:~ root# lsof -nPl -iTCP -sTCP:LISTEN

The command works and we get the identical output formats in both Linux and OS X.
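If you only want the process name, PID, and port out of that output, a little awk on top works. This is a sketch assuming lsof’s usual column layout (COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME, with the LISTEN state appended last); the function name is my own.

```shell
#!/bin/sh
# listeners_summary: read `lsof -nPl -iTCP -sTCP:LISTEN` output on stdin
# and print one "command pid port" line per listener.
listeners_summary() {
  awk 'NR > 1 {
    # NAME is the second-to-last field, e.g. "*:22"; the port follows the last colon.
    n = split($(NF - 1), parts, ":")
    print $1, $2, parts[n]
  }'
}
```

Usage would be `lsof -nPl -iTCP -sTCP:LISTEN | listeners_summary`.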

We can display and collect the information we are looking for.

Too Long; Didn’t Read

Use the following command/flag combination in OS X to list listening TCP sockets with the process name and PID associated with them.

osx:~ root# lsof -nPl -iTCP -sTCP:LISTEN

You can check out the lsof man page to change the display options and filters as necessary. Many options are available and piping any output into grep may provide additional granularity.

Do you know of a better way?

Let me know! I’d love to know of any better or different ways to gather this kind of data in OS X.
