Not your typical Friday night

Before the world was turned upside-down, I had a fairly interesting experience the evening of March 6th, 2020. Now that Zoho, the maker of ManageEngine Desktop Central, has written about it, I guess it's safe for me to do so as well.

Things started out pretty standard... I sat in a parking lot waiting for my wife to get out of the store, doing what I often do in my downtime: reading my curated, cybersecurity-heavy news feed. One article from ZDNet stuck out like a sore thumb.

Zoho zero-day published on Twitter

https://www.zdnet.com/article/zoho-zero-day-published-on-twitter/

I looked at other news sites to verify the info. Sure enough, Bleeping Computer and others were writing about it too.

Zoho Fixes No-Auth RCE Zero-Day in ManageEngine Desktop Central

https://www.bleepingcomputer.com/news/security/zoho-fixes-no-auth-rce-zero-day-in-manageengine-desktop-central/

Why does this matter?

I'm not going to argue whether the researcher who published the info was right or wrong. Instead, let's talk about the vulnerability and the aftermath. A 0-day remote code execution (RCE) flaw is the worst of the worst in cybersecurity. The only way you could possibly classify it as even more abysmal is when it happens in software like ManageEngine Desktop Central. Desktop Central is used by thousands of organizations around the globe to control computers and deploy software. On top of the vulnerability itself, there is a proof-of-concept that makes it trivial to exploit, and attackers have an easily downloadable list of targets via Shodan. Holy &@$#! Not to mention the news is getting released at the worst possible time, i.e. Friday evening, when a majority of IT staff have gone home for the evening or maybe even for the weekend. This is bad, really bad, on so many different levels. Especially when you consider how ransomware gangs and criminal organizations have been targeting these exact types of systems in recent news.

Who's Affected?

My first step was to download the Shodan list in CSV format and manually parse the data to verify none of our MSPs or IT departments were on the list. So far, so good!
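
If you would rather script that sanity check than eyeball the CSV, something along the lines of the one-liner below works. The file names ours.txt and shodan_export.csv are placeholders; ours.txt would hold the IP addresses, domains, or organization names you manage.

# ours.txt (placeholder name) lists the IPs/domains/org names we care about;
# any line of output means one of "our" systems is on the exposed Shodan list.
grep -iF -f ours.txt shodan_export.csv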

I decided to look at 10 or so sample servers just to see how IT and security teams were responding to this issue. Out of the sample sites I visited, literally every single one was out-of-date. More importantly, they were sitting ducks until they applied the patch. And no, I didn't run the exploit to verify this. It was as easy as visiting the login page, which shows the build number.
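
If you want to script that spot check instead of clicking through login pages, a rough sketch is below. It assumes the login page exposes a build string somewhere in the returned HTML, so the grep pattern is a guess -- adjust it to whatever your instance actually returns.

# Rough sketch -- fetch the login page and look for anything resembling a
# build string. The grep pattern is an assumption, not gospel.
curl --insecure -sL "https://<IP address>:<port>/" | grep -io 'build[^"<]*' | head -n 5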

I didn't have hostnames for a majority of the entries, and the organization/location fields were pretty worthless because they reflected where the systems were hosted, not who they belonged to. I did, however, have IP addresses for each of the affected systems. I manually entered those IP addresses (and ports) into the browser. From there, I was able to do a "reverse lookup" of the SSL common name (the scripted version of that lookup is shown below).

I didn't see a bunch of IT providers and MSPs in my small sample size as I originally thought I would. However, it didn't take long to realize several of the organizations were what I consider critical -- schools/universities, public works, hospitals, banks, etc.

Well, it became pretty clear what I needed to do. No, this wasn't exactly how I had planned to spend my Friday evening (and eventual weekend), but what the heck?

The first thing I needed to do was expedite this process. I limited my data/search to the US. Why *just* the US? It's not that I don't care about others around the world, but I knew I could only do so much. Also, I sadly don't know any foreign languages, and even if I did, my office phone doesn't dial internationally. Next, I concatenated the IP and port fields in the CSV file and exported them to a text file. I then wrote a really simple bash script to loop through the IP address and port list, inserting each value into a curl command similar to the one below. I redirected the command output to another text file so I could go down the list one-by-one.

curl --insecure -v https://<insert IP address & port> 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }' | grep "common name:"
*       common name: <returned name>
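
The script itself was nothing fancy -- something along the lines of the sketch below, where targets.txt is a stand-in name for the exported IP:port list. Depending on how your curl is built, the certificate line may read "subject: CN=..." rather than "common name:", so adjust the final grep accordingly.

#!/bin/bash
# Minimal sketch of the loop described above. targets.txt (placeholder name)
# holds one IP:port entry per line, e.g. 203.0.113.10:8443
while read -r target; do
    echo "== ${target} =="
    curl --insecure -v "https://${target}" 2>&1 \
        | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }' \
        | grep "common name:"
done < targets.txt > commonnames.txt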

First Contact... And Second... And Third

Once again, I specifically looked for critical infrastructure based on the domain names. I visited each website looking for contact information... And I started calling. To say I had some awkward phone conversations would be putting it mildly. I've had "odd" conversations in the past with organizations to let them know they were already compromised, but I think this was my first attempt at being proactive. At one point, an overnight messaging service hung up on me because they thought I was nuts... And I really couldn't blame them.

I was pulling out all of the stops. Phone calls didn't work for everyone since some went straight to voicemail, so I was also using website contact forms, emailing possible contacts based on OSINT-obtained email formats, messaging non-connected LinkedIn executives based on website org charts, etc. After the first few, I decided I should have a "form message" of some sort, so I hurriedly put the message below together and started using it across the various platforms. Meanwhile, I was receiving callbacks from people I had left messages with. My wife even came in a couple of times to check on me, see what the fuss was about, and find out who I was talking to at that hour of the night.

<name here>, you have a server on the internet that is highly vulnerable to a 0-day vulnerability. I discovered your server with a Shodan search so there is a high likelihood you will get targeted. You need to have your IT staff upgrade it immediately.

https://<domain name>:8443/configurations.do

I don't know exactly what time it was when I finally shut the madness down, but I was mentally exhausted and realized that whatever dent I was able to make, I wasn't going to save the world. Still, it was tough to sleep. The craziness picked back up the next day when I started receiving messages from different folks and organizations I had contacted via "alternate" means. I even ended up with some Twitter messages, which was interesting because I didn't remember contacting anyone via Twitter. Maybe that meant some people looked me up and thought, maybe this guy isn't completely off his rocker?!?! 😉

I know the messaging got through to organizations. Do I know if my efforts stopped anything? No. It's awfully hard to prove either way. What I do know is that attackers were starting to leverage the vulnerability no later than Sunday, so my instincts were right. Unfortunately, one of the organizations I had reached out to got back to me to say,

Guess who got pwned Sunday morning? Yep. Someone was already poking at the Shodan list. They weren't very elaborate in their attack whoever they were. Didn't try to spread via ME as would be the expected outcome of taking over the ME panel.

Yeah, sometimes you wonder what else you could have done...

Hey you!

The cool part is that someone else took note as well... None other than Zoho/ManageEngine! One of the customers forwarded me the notification email they received from ManageEngine several weeks later and asked if that was me. My honest answer was, "I don't know." Why? It is entirely possible another "white hat hacker" did the same thing I did. What I did wasn't exactly rocket science, and there are a *lot* of people in my circle of friends/colleagues who would have done the same thing.

Takeaways

  1. ManageEngine apparently doesn't have the best track record with security researchers. Does this experience change that? Maybe? Hopefully? At any rate, they apparently saw the need to reach out to customers after hearing someone else was doing it for them. If and when a new 0-day vulnerability is released in the future, I'm quite happy knowing I may have played a small part in changing THEIR operating procedures.
  2. As I mentioned, this wasn't my first rodeo "cold-calling" organizations, and I know for a fact I'm not the only one who has notified organizations of breaches, security incidents, bugs, etc. The question is, "Does your organization have a way to handle third-party calls?" What would you do if you received a call from someone you have no affiliation with and they told you that you have servers extremely vulnerable to a 0-day exploit? I remember giving my 30-second spiel to the operator at one organization and their return question was, "Are you a provider or a patient?" <Sigh>
  3. Taking a step back, does your organization have an easy way for someone to notify you in the first place? For example, I would have loved to automate an email to security@<domain name> for 2000+ orgs around the globe, but I doubt that would work very well. In fact, I used that approach as a last resort in many of my efforts, and every single email bounced back to me. If we as an industry can come up with better ways to notify others and raise alarms, I'm all ears.
  4. Finally, if someone tells you that you can't make a difference as one person, tell them to go pound sand! 😉

Need help securing your business? Please keep TreeTop Security and the Peak platform in mind for a better approach to small business cybersecurity. We provide cybersecurity peace of mind for small businesses.

-- Dallas Haselhorst

This story was originally posted on LinkedIn.
https://www.linkedin.com/pulse/your-typical-friday-night-dallas-haselhorst