This time I chose to take a look at Microsoft's XML Core Services Information Disclosure Vulnerability (CVE-2017-0022), which allows a malicious web site loaded in Internet Explorer to determine whether specific executable modules or applications are present on the user's system. It has also been exploited in the wild, as some exploit kits (Astrum, Neutrino) were using it to fingerprint victims' machines. Although an official fix was already released in March, what made this vulnerability interesting was that it was a logical flaw, whereas so far our team had mostly been patching memory corruptions.
Based on the TrendLabs Security Intelligence Blog post, I managed to reproduce the PoC. The PoC (image below) checks for the existence of files by calling XMLParser::LoadDTD with a binary file's resource URL as the input parameter. If the resource (such as version info) is found, the vulnerable XMLParser::LoadDTD tries to parse it as a DTD, fails because version info is not a DTD, and returns 0x80070485. If the binary file or the resource within it does not exist, 0x80004005 is returned instead.
There were enough pointers in the report to easily spot the vulnerable function XMLParser::LoadDTD and make a diff between the vulnerable (left) and the patched (right) version.
TrendLabs' report reads as if removing the check if (v3 >= 0) alone fixed CVE-2017-0022, so I located this check in the assembly shown above:
test    edi, edi
jl      short loc_728C183C
and created a micropatch that simply jumps over it. However, that did not change the outcome of the PoC: with my micropatch applied, it still returned different error codes for existing and non-existing files. I then debugged Internet Explorer on a patched machine and found that the function IsDownloadExternal, which follows the skipped if, also behaves differently. Diffing code graphs for this function showed that quite a few changes had been introduced to its implementation:
The gray conditional blocks have been added as part of the official patch. After I added this logic to my micropatch for the vulnerable msxml3.dll, the vulnerability was finally neutralized.
The micropatch has been published and distributed to all installed 0patch Agents. If you want to see it in action, check the video below.
Black box analysis of a logical vulnerability like this one can turn out to be quite challenging because, unlike with memory corruptions, no exceptions are thrown that could be caught during debugging to help pinpoint the culprit. But it all turned out well, and the released micropatch is good proof of that.
Whether Vendors Patch Their Products or Not, We Have Your Back
by Mitja Kolsek, the 0patch Team
Three days ago, Cisco Talos published a post about a code execution vulnerability in LabVIEW, whereby opening a malformed VI file with LabVIEW results in writing NULL bytes at chosen memory locations. This can most likely be used for executing arbitrary code by carefully placing NULLs in various data structures or on the stack. Nothing unusual so far.
According to Talos' post, the producer of LabVIEW, National Instruments, initially* refused to patch this vulnerability, stating that "National Instruments does not consider that this issue constitutes a vulnerability in their
product, since any .exe like file format can be modified to replace
legitimate content with malicious."
A VI file is not a Windows executable that would run on any Windows computer. However, if you have LabVIEW installed, a VI file will get opened by it, and can be made to automatically run its embedded code. This code is very powerful and by design has the ability to access your file system and launch native executables. So a malicious VI file, say, received via email or found on the Internet, could attack your computer if opened in LabVIEW - even without the vulnerability described here.
This is not entirely different from, say, a Microsoft Word document, which is also not an executable file, but can contain powerful damaging macros. (Although Word does warn you about macros and you have to explicitly allow their execution.)
National Instruments provides Security Best Practices stating that you should exercise the same precautions with a VI file as you would with an EXE or DLL file. This makes sense - if an attacker can get you to open his malicious VI file, he can simply put malicious VI code in it that will attack you, just as if he could get you to open a malicious EXE. Importantly, he does not gain any additional benefit from the memory corruption issue described here, as he would still need you to open his VI file - and in contrast to Word and its macros, LabVIEW does not ask for your permission to execute VI code.
However, the Security Best Practices document further states that if you want to safely inspect a suspect VI, you should add it as a sub-VI to a blank VI and review its code before running it.
In this case, however, there is a difference between a legitimately-formatted VI with malicious VI code (which does not get executed as a sub-VI) and a malformed VI causing memory corruption when loaded (which executes malicious code even if loaded as a sub-VI).
This vulnerability therefore allows an attacker to mount an attack with a malicious VI file against a user following National Instruments' Security Best Practices. Since the vendor initially stated that they would not issue a fix (it's still not available at the time of this writing), we decided to make one ourselves.
Analysis
In order to fix this vulnerability, we needed to first understand it. We started with a sample VI file.
A .VI file (example shown above) is a data file in a publicly undocumented format. It gets opened with labview.exe, which, among other things, parses the file's RSRC segments into in-memory RSRC data structures. You can see one RSRC segment at the beginning of the file above, but there can be others further down in a file.
Talos' detailed vulnerability report provided useful details on where their malformed .VI file caused a crash. Apparently, a method called ClearAllDataHdls (yes, the affected DLL comes with some symbols) walks through an array of what we can assume are "data handles". Each data handle has an offset to its own array of some 20-byte objects, and the count of these objects. The code simply walks through all objects of all handles and writes a NULL to each one of them. Manipulating this offset allows for writing one or more NULLs at arbitrarily chosen locations in memory.
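To make the primitive clearer, here is a rough C sketch of the behavior Talos describes - the structure layout, field names and function signature are our assumptions for illustration, not LabVIEW's actual code:

#include <stdint.h>

/* Illustrative only: names and field layout are our guesses. */
typedef struct {
    uint32_t object_count;   /* number of 20-byte objects for this handle */
    uint32_t offset;         /* offset of the handle's first object       */
} data_handle;

/* Walks all objects of all handles and writes a NULL DWORD into each one.
   Because the offset is taken from the file without validation, a
   malformed .VI can steer these writes to arbitrary memory locations. */
static void clear_all_data_hdls(uint8_t *base, data_handle *handles, uint32_t handle_count)
{
    for (uint32_t h = 0; h < handle_count; h++) {
        uint8_t *obj = base + handles[h].offset;          /* attacker-controlled */
        for (uint32_t i = 0; i < handles[h].object_count; i++) {
            *(uint32_t *)obj = 0;                         /* the NULL write      */
            obj += 20;                                    /* next 20-byte object */
        }
    }
}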
It was trivial to create a malformed .VI file from a sample file based on this information. And, as expected, it crashed LabVIEW with an access violation. However, the crash was not in ClearAllDataHdls, but in a method called StandardizeAndSanityChkRsrcMap (actually in a small helper function called by it). What happened? Was our PoC different, or did we find another bug?
It turned out we were using LabVIEW 2017, while Talos did their testing on version 2016. It appears that LabVIEW 2017 added some RSRC sanitization code: looking at this method revealed that some sanity checks are performed on the RSRC data, and a .VI file is rejected if these checks fail. Unfortunately, these checks do not cover the malformed data in question. StandardizeAndSanityChkRsrcMap also initializes the above-mentioned 20-byte objects by reversing their byte order to little-endian format, and this very action is what caused our crash by accessing an invalid memory address.
It was time to take a closer look at StandardizeAndSanityChkRsrcMap and understand the RSRC data structure. The following image shows the most important part of StandardizeAndSanityChkRsrcMap, where the outer loop walks through all the handles, and the inner loop walks through all objects of a given handle and byte-reverses them.
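As a side note, the inner loop's per-object work can be sketched in C roughly like this (our reconstruction for illustration only; we treat each 20-byte object as five DWORDs and use a compiler intrinsic in place of whatever byte-swapping routine LabVIEW actually uses):

#include <stdint.h>

/* Reads and rewrites one 20-byte object in place, converting it from the
   file's byte order to little-endian. If the handle's offset points outside
   the RSRC structure, this read/write access is what faults. */
static void byteswap_object(uint32_t *obj)   /* obj = 5 DWORDs = 20 bytes */
{
    for (int d = 0; d < 5; d++)
        obj[d] = __builtin_bswap32(obj[d]);
}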
Now let's look at a sample RSRC structure in memory, after all the values have been byte-reversed.
The structure begins with a 30h-byte header (purple), followed by a DWORD structure length (blue), which is the size of the entire structure as shown - in our case 338h. After that, a DWORD handle count (green), 17h, tells us that there are 23 handles in the handle array that follows (red). Each handle consists of three DWORDs: a seemingly user-readable keyword, the count of the handle's objects minus 1 (so 0 means 1 object), and the offset of its first object; the offset is measured from the handle count field (green). Finally, the rest of the structure is the object data area (black). Each object takes 20 bytes, so if a handle has n objects, they occupy n * 20 consecutive bytes at the specified offset.
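In C terms, the layout described above could be summarized roughly as follows (a sketch only - the format is publicly undocumented and all names are ours):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t keyword;          /* seemingly user-readable keyword             */
    uint32_t num_minus1;       /* object count minus 1 (0 means 1 object)     */
    uint32_t offset;           /* offset of the first object, measured from   */
                               /* the handle_count field                      */
} rsrc_handle;

typedef struct {
    uint8_t     header[0x30];  /* 30h-byte header                             */
    uint32_t    struct_len;    /* size of the entire structure (338h here)    */
    uint32_t    handle_count;  /* 17h = 23 handles in this sample             */
    rsrc_handle handles[];     /* handle array; the object data area follows, */
                               /* each object taking 20 bytes                 */
} rsrc_map;
#pragma pack(pop)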
Clearly, a valid RSRC structure would have all handles' objects located neatly inside the object data area. But a malformed RSRC structure can specify an arbitrary offset, and thus tamper with chosen memory locations.
Patching
Our goal at this point was to add the missing sanity check to the original code: we should not allow accessing any object data outside the object data area.
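In C, the condition such a check needs to enforce looks roughly like this (our sketch, reusing the field names from the structure description above; it mirrors the logic our micropatch implements in assembly below):

#include <stdint.h>

/* Returns nonzero if a handle's objects would not fit inside the object
   data area. Offsets are measured from the handle count field, which sits
   34h bytes into the structure. */
static int handle_objects_out_of_bounds(uint32_t struct_len, uint32_t num_minus1, uint32_t offset)
{
    uint32_t num = num_minus1 + 1;
    if ((offset | num) & 0xFFF00000)        /* reject negative/huge values so   */
        return 1;                           /* the arithmetic below cannot wrap */
    uint32_t end = offset + num * 20;       /* first byte past the handle's objects */
    int32_t  max = (int32_t)(struct_len - 0x34);  /* maximum allowed end offset     */
    return (int32_t)end > max;              /* signed comparison, like the patch's jg */
}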
We needed to find a good location for injecting the patch, and we chose one right after a handle's offset is obtained, at which point we had all the information needed to implement the sanity check. The following image shows the location of our patch.
We have the following information available at the patch injection point:
esi holds the offset of the current handle's first object
dword [ebp+10h] holds the number of objects for this handle (reduced by 1)
dword [ebp-4] holds the address of the handle count value, which is right next to the structure length value in memory.
The existing sanitization code exits the function with return value 6 (in eax) when the existing sanity checks fail, indicating to the caller that the structure is invalid. When this happens, LabVIEW tells the user that the file is invalid. We decided to do the same in our sanity check.
In pseudo-code, this is what we needed to do:
if the offset of the current handle is negative or ridiculously large, return with error 6
if the number of objects for the current handle is negative or ridiculously large, return with error 6
multiply the number of objects by 20 to get the size of the object array
add the offset to the size of the object array to get the offset immediately after the array
calculate the maximum allowed offset by subtracting 34h (the offset of the handle count) from the structure length
if the last byte of the object array lies beyond the maximum allowed offset, return with error 6
; esi is offset of the handle's object data
    test esi, 0FFF00000h        ; is offset negative or too huge?
    jnz error                   ; if so, exit with error

    mov eax, dword [ebp+10h]    ; eax = number of objects in this handle (-1)
    inc eax                     ; eax = actual number of objects in this handle
    test eax, 0FFF00000h        ; is number of objects negative or too huge?
    jnz error

    imul eax, 14h               ; size of object data for this handle
                                ; (1 object is 14h bytes)
    add eax, esi                ; eax = offset right after this handle's
                                ; last object

    mov edx, dword [ebp-4]      ; stored address of handles_num
    mov edx, [edx-4]            ; structure length is stored right before
                                ; handles_num
    sub edx, 34h                ; edx is the maximum allowed offset
    cmp eax, edx                ; are we out of bounds?
    jg error                    ; if so, exit with error

    jmp continue

error:
    call PIT_ExploitBlocked
    jmp PIT_0x30c09             ; jmp to epilogue with error code 6

continue:

code_end
patchlet_end
Our micropatch was published and distributed to all installed 0patch Agents yesterday (two days after Talos published vulnerability details), and you can see it in action in this video.
The benefits of micropatching
This story is a common one: a software vendor creates a product, many users use it, then someone finds a vulnerability. The vendor is notified, but it's expensive for them to create and distribute a patch outside their schedule. Even with an updating mechanism in place, the so-called "fat updates" (updates that replace huge chunks of the product) are risky; many things can go wrong and expensive full-blown testing has to be done. And then the update has to be delivered to users, who have to waste their precious time with updating. And all that just for a single vulnerability? Understandably, vendors are inclined to try postponing such unwanted updates and bundle them with scheduled ones, often buying themselves time by downplaying the issue. When that happens, the security community likes to drop the details ("hey, if the vendor says it's not an issue, there's no harm in publishing"), and that usually pushes the vendor to issue a fix after all. They do it under pressure, and the risk of error is higher than usual. Finally, since un-updating is not really a thing, a botched fix could mean a nightmare for users to just get back to the vulnerable functional state.
In contrast, in-memory micropatching can fix a vulnerability with minimal and extremely controlled code modification (usually a dozen or so machine instructions) with no unwanted side effects. In addition, a micropatch can be applied to a product instantly, while the product is running, and just as instantly removed if suspected to be causing problems. All this allows the testing to be less rigorous, and only focused on the modified code - therefore cheaper.
Now imagine National Instruments had micropatching integrated in LabVIEW. It would be inexpensive to create and distribute a highly reliable micropatch for a vulnerability like this - especially with their intimate knowledge of the product - and they could stay on their original release schedule while users would get their LabVIEW installations micropatched without even knowing it. No PR mess, no unhappy users, and very little disruption of business. What's not to like?
Software vendors are welcome to approach us about saving money, grief, and their users' time with micropatching.
If you have 0patch Agent installed (it's free!), this micropatch is already on your computer and gets automatically applied whenever you launch LabVIEW 2017.