Horrible Anti-Virus

WORKFLOW PERSPECTIVES FOR EFFECTIVE MALWARE REVERSING (BASICALLY – RANTS from yeeeears ago…)

All malware reversers must at some point have faced a gear lust of sorts, another pan-domain, time-eating brain slug. I would like to address the commercial variety rather than the underground one, as the latter seems to have made the crossover successfully by skipping the penny-wise options altogether and simply stealing what it needs, thus earning its sinful inventories. Of the plethora of available reversing utilities, many of the best, erstwhile and contemporary, are home-brewed. Ergo, the bad ones don't have the disease; au contraire, they spread it, both the pain and the pleasure.

I will be discussing a few options gleaned from my experience working in a "malware lab", and elaborate on the wisdom thus acquired. The rest of this article explains why I find certain approaches more effective than others, along with successful instances of toolkit configuration and the issues resolved as a result. I focus mostly on the Windows platform, as the core techniques of reversing are platform agnostic.

So what do you need to reverse malware? Judging more by the type of executable than the payload variety:

1. You will need tools to identify, extract and edit information from the malware executables themselves, including execution essentials like the embedded instructions as well as data structures, library or OS function calls and custom algorithms, among others.

2. You need tools to investigate, intercept and/or subvert the communications between the OS layers and the malicious executable or any extraneous malicious components, including system transactions at various levels, even hardware if required. Essentially this boils down to Ring 0/Ring 3 interaction analysis, in Windows parlance.

3. Finally, you should be able to carry out your agenda with minimum surface for counter-attack by the malware.

Essentially, this tripod of a toolkit needs to support thorough static and dynamic analysis in a controlled environment with minimum collateral damage.

Much of any reversing activity has to do with interception. Anything that can be intercepted will be used against the target. Stealth and covertness are the primary weapons in all kinds of warfare. One of the premier actions during any real-world war involves intercepting enemy communications, or even destroying the foe's communications framework in order to incapacitate them. It's a timeless concept that defies any sort of trend. Its roots are purely human, though, and as long as we have skulduggery and treacherousness in our natures, malware will always use such metaphors in its design regardless of the underlying implementations, which themselves are prone to becoming obsolete at the breakneck speed of technological progress. In pretty much anything, communication is king. To emphasize the same: after we survive the ordeal of human birth, the reassuring touch of other humans is essential for the newborn's survival, the absence of which will most certainly result in death. It is that important. But I digress.

In reversing (from here on meaning software reversing), any form of interception generates raw data from which information is to be extracted; data mining that information should then give you the much-needed intelligence. When dealing with malware binaries, until recently, file size itself was never a limiting factor for analysis. Malware was optimized to death to minimize its filesystem or memory footprint, which conversely kept analysis manageable. Earlier viruses and worms were analysed leisurely over days or weeks to get complete coverage of the code, giving better precision in the analysis excursions. Much of the work was done manually in a text editor and in early, rudimentary debuggers. That era is long gone; these days it is just not practical to get 100% coverage of 50,000 malware samples per day, even with team sizes as large as 30. So it's a numbers game now. You have to dissect the sample set first and select priority samples based on various factors, none more important than the current attack footprint. However esoteric the malware samples, it's the ones that cause the most real-world damage that typically get the most attention. That makes a lot of sense from the business side: the party who comes out with the cure fastest reaps the most profit, of various kinds, including better press. For you as a contributing member, your analysis must be quick and surgical, extracting the most information from a sample in the least amount of time.

It's usually quite easy to infer who is a novice and who is an adept just by observing their modes of working. Most novices take solace in the fact that if the existing toolkit does not detect anything suspicious, their job is done. So, as a safety net, it's normal for them to utilize pretty much every tool on the planet to do their investigation for them. In this case the confidence in their analysis is proportional to the number of tools that have examined their assignments. Many don't do a really thorough job even with such an arsenal at their disposal, which means none of the elements in their toolkit are used to a degree where their efficacy really shines through. Pareto's principle cannot apply here: if really 80% of results are brought about by 20% of effort, it seems that not even half of that effort is being directed well. It's obvious that this should give analysts an incentive to really know their tools first, well before the real-world analysis begins. Of course the trial-by-fire method also works, and getting weather-beaten in the analysis factory is a good way to stay sharp and learn from your environment and its occupants. Suit yourself.

I personally use a hex editor first over any sample, regardless of where it comes from or what it says on the label. Filetype signatures can be forged, data can be appended or hidden, the file may be damaged but repairable, and none of these factors might stand out during the tool-based examination rounds. Every reverser has written at least one variety of hex editor. If you still have not, please do; this one little task immediately qualifies you as a hex-head. Oh, and the next step is to actually use it! It is really difficult for a non-running file, lying plain in the filesystem and represented byte by byte, to obfuscate the obvious. In fact, any anomalies really shine through in this stellar utility. It just takes a little patience, training discipline and a keen eye for detail. The last part is an evolutionary advantage we already have, but sometimes we forget to make use of it. In this essential reversing activity of file format deciphering, it is pretty much use it or lose it. Learn the header offsets by heart. In most 32-bit malware, still the bulk of all samples, the largest primitive data type you deal with is the 4-byte dword. The larger data types are merely arrangements of these smaller primitives in a contextual manner, giving what we call data structures, or simply containers of grouped data.

Cursorily, say you have the MZ header and the PE header, the latter containing the File Header and the Optional Header; what you typically want to do in your hex editor is read off the bat the various values that define a PE file. How do you get there? Just rote practice. It's a lot like practising scales in piano lessons; later on your fingers just flow! The best way is to start comparing individual samples and studying the anomalies, or the most important parts of the headers, first. As malware has more anomalies than most normal shareware, you will get plenty of material for your own study. After a short while you will find that the hex editor is your best friend. I like PEiD/RDG, but it's been quite a while and my toolkit does not have them, save for the legacy tools section, which I revere but rarely incorporate. Do this and you have done yourself a great favour by removing a dependency. Much of the packer-related info that these tools display is not too useful in my own sessions, as I have a set way of dealing with pretty much any kind of compression or obfuscation. Just knowing that it's FSG or THEMIDA is not useful on its own, and if you already know how to unpack them, then you don't really require any 3rd-party utility for manual identification 😉 But packing and unpacking are not in the scope of the current article; I'll delve into them in another one, so I'll pause at that for now.
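To make the rote part concrete, here is a minimal C# sketch (my own illustration, not a full parser) of that same header walk: read e_lfanew at offset 0x3C, check the PE signature, then pull a few File Header and Optional Header fields. It assumes a well-formed 32-bit PE and skips the section table entirely.

using System;
using System.IO;

class PeQuickLook
{
    static void Main(string[] args)
    {
        byte[] b = File.ReadAllBytes(args[0]);

        // 'MZ' magic; e_lfanew at offset 0x3C points to the PE header
        if (b[0] != 'M' || b[1] != 'Z') { Console.WriteLine("No MZ header"); return; }
        int peOff = BitConverter.ToInt32(b, 0x3C);

        // 'PE\0\0' signature
        if (BitConverter.ToUInt32(b, peOff) != 0x00004550) { Console.WriteLine("No PE signature"); return; }

        // IMAGE_FILE_HEADER starts right after the 4-byte signature
        ushort machine     = BitConverter.ToUInt16(b, peOff + 4);
        ushort numSections = BitConverter.ToUInt16(b, peOff + 6);
        uint   timeStamp   = BitConverter.ToUInt32(b, peOff + 8);

        // IMAGE_OPTIONAL_HEADER (PE32) starts at peOff + 24
        int opt = peOff + 24;
        uint entryPointRva = BitConverter.ToUInt32(b, opt + 16);
        uint imageBase     = BitConverter.ToUInt32(b, opt + 28);

        Console.WriteLine("Machine:        0x{0:X4}", machine);
        Console.WriteLine("Sections:       {0}", numSections);
        Console.WriteLine("TimeDateStamp:  0x{0:X8}", timeStamp);
        Console.WriteLine("EntryPoint RVA: 0x{0:X8}", entryPointRva);
        Console.WriteLine("ImageBase:      0x{0:X8}", imageBase);
    }
}

Once these offsets are in your fingers, writing (or reading) something like this takes minutes, and anomalies such as an entry point outside any section or a bogus section count jump out immediately.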

Also, after much practice, I suggest you start incorporating automation more and more. Say you know how to read an entire PE file structure, the sections, the overlay and so on, and are comfortable detecting most kinds of well-known file-related anomalies; you could try to write a script or build an application that displays the data in a way that complements your analysis style. Many hex editors also have a template or file-structure feature which can be exploited to the fullest. File carving, fundamental malware forensics, file rebuilding and the like are also done beautifully in hex editors; do explore them. You will develop a style of your own in time, so look forward to it! After you do this for one format, try it on others for the sake of variety. It's like practising jazz, then going for an electronic music production session, and listening to classical themes from OSTs at night. It invigorates your sense of understanding and builds mutually complementing expertise over time. Check out a tip from Malcolm Gladwell's book Outliers, where he states that 10,000 hours are required for mastery in any endeavour. Interesting man-hour deduction, that.

Next up, the tool I find most useful after my cursory/detailed session with the hex editor is the dynamic analysis toolkit. Many a time I simply skip this part if I know exactly what I am looking for. Quite a few of you who started out in software cracking may reminisce that one of the first things to do with a potential target is to execute it, see how it runs and what activities can be observed, and read the accompanying manual at that. Well, this is malware: normally you won't get any manual, and many execution scenarios give no visual indication of a program running. What you might get is a lot of surprises and system malfunctions if you are unlucky. So I find it far safer to first analyse the target's file structure and conformity, along with the host of other information that comes out in a hex dump, BEFORE any execution. Typically it's essential that I use a sandboxing layer of some sort. Much of the advance in the domain of virtualization has negated the use of hardware machines requiring constant monitoring and backups; you get used to snapshots and quick reverts. Time is malleable, it seems, when it comes to computing. I use Sandboxie for quick user-mode analysis and VMware for kernel stuff, typically started in debugging mode, running the most bare-bones setup of XP SP2, with networking enabled on a dirty line or a simulated one.

This phase is very important if I want to find out the overall scope of the malicious activity by data mining the execution-generated data. For Windows binaries much of that intelligence comes from API patterns, API categories based on function and intent, registry and filesystem access, network-related captures, threads created, and finally trademark payloads plus creative methods such as process hollowing and the armour layers used, among others. For kernel mode the data generated can be very large, but you can be more focused there: you typically look for inconsistencies in one of the descriptor tables, look for the various kinds of kernel hooking, check the different exception handlers (structured/vectored), check for file and memory image mismatches, check the drivers loaded, the processes started, the services list, and check for particular anomalies like resource leaks and crash points/causes as well as the stack parameters, among others. Finally you analyse the data and come to a conclusion about the malicious mechanism and the payloads of the target malware. The idea when analysing the layers is to build a picture that describes the scenario best without getting too perfect on every opcode (nothing is stopping you, though!).

I set up VMware with different Windows versions installed, along with a copy of the WinDDK installed in each image. The guest is the target: create the named pipe, start WinDbg on the host machine, and once the guest breaks in, your analysis session has started. Symbolic breakpoints and exception handler enabling/disabling are among the most important features I use in WinDbg. The rest proceeds as per the need of the hour. If you get comfortable over time, you can incorporate WinDbg for your user-mode sessions as well.
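For reference, a minimal version of that plumbing, assuming an XP-era guest and a VMware serial port mapped to a named pipe (the ARC path and the pipe name com_1 are placeholders for your own setup):

Guest boot.ini, add a debug entry:
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP SP2 (kernel debug)" /fastdetect /debug /debugport=COM1 /baudrate=115200

VMware VM settings: add a serial port, choose "Use named pipe", name it \\.\pipe\com_1, with this end as the server and the other end as an application.

Host, attach WinDbg to the pipe:
windbg -k com:pipe,port=\\.\pipe\com_1,resets=0,reconnect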

The value of incorporating the Sysinternals suite cannot be overstated. It will make your analysis sessions very effective, especially for user-mode work and general system monitoring. It is very useful for memory-related investigations as well, along with network data analysis and startup-related investigations, and it includes the multipurpose utilities Process Monitor and Process Explorer.

So currently my setup looks like this: Hex Editor + VMware + WinDDK + Sysinternals.

It already seems like it's building up, right? In this case I think the encapsulation concept really works. The only real tool installation, in terms of an analysis step, is your hex editor. The virtual machine is the analysis environment, so that is implicit; the WinDDK (and with it WinDbg) is already installed on both the host and the guest OS and running. Process Explorer is already running regardless of whether I am analysing or not, and is set to replace Task Manager.

I find it especially useful to set up WinDbg instances on both the host and the guest. Much of the memory, crash dump and kernel analysis can be done in WinDbg, and I really recommend learning this tool well. In fact, I find myself leaning less and less towards other focused tools like Volatility/Moonsols/Memoryze, because everything I need I get through a combination of commands in WinDbg.
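As a flavour of what that combination looks like, a few stock WinDbg/KD commands I keep returning to for kernel-side triage (illustrative, not exhaustive; the SSDT dump assumes a 32-bit target with symbols loaded, and the driver name and log path are placeholders):

!process 0 0      - summary of all processes (EPROCESS list)
lm t n            - loaded modules with timestamps; odd names and paths stand out
!idt -a           - interrupt descriptor table, check for hooked vectors
dps nt!KiServiceTable L poi(nt!KeServiceDescriptorTable+8)
                  - dump the SSDT; entries resolving outside ntoskrnl/win32k are suspect
!drvobj \Driver\<suspect> 7
                  - device objects and dispatch routines of a suspicious driver
!analyze -v       - first pass over a crash dump
.logopen c:\logs\session.log
                  - keep the whole session for later data mining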

The revised view of the setup now looks like this:

Host[WinDbg] → VMware[WinDbg + Procmon + TcpView + Process Explorer] + Hex Editor

The final tool that completes my setup is IDA Pro. Enough has been said about this awesome software, and many have moved on to it from excellent debuggers like OllyDbg. The integrated debugging in IDA, along with the WinDbg console input integration, is something I find very useful. As I am less focused on exploits and more on malware, I tend to use user-mode debuggers like Immunity/OllyDbg less, unless there is a very specific use I cannot do without, such as IAT rebuilding and dump reconstruction of user-mode malware. Indeed, the more time you spend with the internal data structures, the more comfortable you will get. User-mode work is more akin to mobile malware, where the isolation removes a lot of analysis effort and it boils down to an anomaly-based search-and-find session, plus crackme-style cryptography exercises. Of course the bar and the analysis complexity are being raised all the time; we can only expect things to get more interesting. In terms of x86/x64 malware, IDA does its job well, and the most interesting things are quite straightforward once you know the essential steps and where to look for what, and then it's oodles of patience.

The final view of the setup now looks like this:

Host[WinDbg + IDA] → VMware[WinDbg + Procmon + TcpView + Process Explorer + Hex Editor + IDA]

 

The only things I find missing are my own tools (apart from the hex editor); well, there is no limit to that. I use my own dashboard made in C# .NET that effectively gives me one-click access to everything I need within a few pixels. My own version of a hex editor gets me all the info I need. It is always good to get into the habit of tool engineering and automation; it makes a problem solver out of you, and if you are like me and hate the command line, the designing aspect might just be more fun, and you could incorporate more features and replace the same old Windows paradigms with more visually appealing interfaces. It is what you work with all day, so this investment is definitely worth it. What I have at the end of the day is a very tight kit of regular reversing tools customized for my analysis needs. I maximize coverage and focus on detail; it helps me every time.

I tend to use less scripting and focus on application-specific needs that can be solidified into a binary, where executable models can be followed well; thus I find myself incorporating multithreading, visualization, and essential language constructs like events and delegates to the max. Pretty much like an SDLC process. It hones your skills, and seeing your own binary run is always a joy. Much of traditional reversing, and the limitations of the current crop of tools, has to do with binary-level reversing; if you had the source code for everything, what would be the point of inventing a disassembler? That being the challenge, it is also an excellent way to understand how to work around these limitations.

Automation only does what we tell it to do. It is better to automate something that is rote and leave the AI business to humans. I agree on many levels with Roger Penrose's treatise (The Emperor's New Mind) on the fallibility of AI and the essentially impossible premise of replicating the human mind; certainly it's nowhere even near right now. Much of it is probability models and solution-space analysis, which is enhanced data mining. Certainly, in terms of malware analysis automation, a lot of what can be automated is in fact dataset optimization for further data mining to be done, finally, by humans. In a typical automation scenario, most of the data is gleaned from execution traces of the malware sample as well as from logging of system-related interactions: use of network layers, dropping a file, installing a driver, using the registry to store binaries, setting autostart points in the OS settings, and launching services are among the more prevalent ones. But this is more by-the-numbers bookkeeping and will likely need a lot of filtering later on.

At the executable level, something I find very useful is address-range analysis: the EIP can be traced and various memory-mapped regions can be demarcated as being written over or decompressing/decrypting; tunnelling through system API address ranges and process hollowing are among the things I spot without much effort, just by keeping an eye out for odd jumps or indirect entry into unexpected regions. Nowadays I do it quickly by making good use of conditional breakpoints and executable state awareness. If the decryption of the layers is not over yet, it's fine to keep going till you hit the sweet spot; it boils down to good use of tracing info and prior preparation in potential memory areas. Most of the time new regions keep coming in and out of the memory map through virtual memory APIs like VirtualAlloc, with protections being set by VirtualProtect, so I made a notifier application for any new regions created during a specific execution trace. It works only for user mode for now but does its job well. I used P/Invoke to the Win32 process read/write APIs and some simple data management and filtering algorithms.
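A minimal sketch of that kind of region notifier, assuming a simple polling approach against an already-running 32-bit target (the names, the 500 ms interval and the output format are mine; the actual tool also does the reading and filtering parts): walk the address space with VirtualQueryEx, remember the committed regions, and report anything newly committed or with a changed protection on the next pass.

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Threading;

class RegionNotifier
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public IntPtr BaseAddress;
        public IntPtr AllocationBase;
        public uint AllocationProtect;
        public IntPtr RegionSize;
        public uint State;     // MEM_COMMIT = 0x1000
        public uint Protect;   // PAGE_EXECUTE_* flags are 0x10/0x20/0x40/0x80
        public uint Type;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(uint desiredAccess, bool inheritHandle, int processId);

    [DllImport("kernel32.dll")]
    static extern IntPtr VirtualQueryEx(IntPtr hProcess, IntPtr address,
        out MEMORY_BASIC_INFORMATION mbi, IntPtr length);

    const uint PROCESS_QUERY_INFORMATION = 0x0400;
    const uint PROCESS_VM_READ = 0x0010;
    const uint MEM_COMMIT = 0x1000;
    const long USER_SPACE_END = 0x7FFF0000;   // 32-bit user-mode range; adjust for x64 targets

    // One pass over the target's address space: base address -> protection of committed regions.
    static Dictionary<long, uint> Snapshot(IntPtr hProcess)
    {
        var regions = new Dictionary<long, uint>();
        long address = 0;
        MEMORY_BASIC_INFORMATION mbi;
        while (address < USER_SPACE_END &&
               VirtualQueryEx(hProcess, (IntPtr)address, out mbi,
                   (IntPtr)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION))) != IntPtr.Zero)
        {
            if (mbi.State == MEM_COMMIT)
                regions[mbi.BaseAddress.ToInt64()] = mbi.Protect;
            address = mbi.BaseAddress.ToInt64() + mbi.RegionSize.ToInt64();
        }
        return regions;
    }

    static void Main(string[] args)
    {
        int pid = int.Parse(args[0]);
        IntPtr hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, false, pid);

        var previous = Snapshot(hProcess);
        while (true)
        {
            Thread.Sleep(500);   // polling for brevity; the real thing traces instead
            var current = Snapshot(hProcess);
            foreach (var kv in current)
            {
                uint oldProtect;
                if (!previous.TryGetValue(kv.Key, out oldProtect))
                    Console.WriteLine("NEW region      0x{0:X8}  protect=0x{1:X}", kv.Key, kv.Value);
                else if (oldProtect != kv.Value)
                    Console.WriteLine("PROTECT changed 0x{0:X8}  0x{1:X} -> 0x{2:X}", kv.Key, oldProtect, kv.Value);
            }
            previous = current;
        }
    }
}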

Now that the briefs regarding the fundamental toolset are done, let's focus on streamlining the malware signature related workflow. Many of us as malware analysts on the floor have to race against time to deliver the signatures, whether it's 1:1 or 1:X sigs. Generic signatures take a little longer to do but are commonplace nowadays. Per-malware signatures save time but have little reuse value after the next variant comes out, and are nearly useless for polymorphic malware (except for unchanged decryption loops or specific file-based fingerprints). Emulation in AV products has not really changed the world in terms of being an automaton that can handle any situation; it functions rather as a compromise in order to pass comparatives well and keep customers happy for a while. It is arguably quite difficult, if not intractable, to cover all potential states in a malware sample, and the implementations have varying degrees of success. The Sophail paper illustrates exactly the kind of snake oil peddling that goes on. But all is not lost: in the never-ending maze of possibilities, where any code region or mapped section can be a potential evil, it's fair to assume certain rules that can identify particular classes of anomalies that are more of a trend in malware, or an unexpected or unauthorised activity that warrants further investigation. This is pretty much the basis upon which heuristics are built. You get the idea. So if the impossible cannot be attained, it's fair that we continue with something more tangible to the best of our efforts. Since the ultimate antivirus and the ultimate virus cannot co-exist in the same system, it's better that we keep our expectations realistic.

For much of my signature-specific work the only tool I really use is IDA Pro, and after seeing quite a few SDKs for signature development I have come to the conclusion that all that glitters is not gold. I won't disclose anything specific, but let's just say that much of the analysis for signature extraction has nothing to do with malware analysis coverage as a metric. If file region hashes are the only way to make the signature SDK work, why spend time doing in-depth analysis? Apart from your own knowledge gains, there is a significant time lag to deal with, especially with delivery deadlines that round off at 1.5 hours. From what I have closely observed, many of us simply skim over the sample to find something that catches our eye; even the workflow is not defined for sig development. And, reiterating the Sophail paper, quite a few of the signatures are taken from sub-optimal areas. An experience anecdote: I learnt this the hard way when I was a new entrant and had to stop analysis work to copy-paste hex bytes so that the customer queue would get cleared. None of the analysis data was being propagated to the support team either. It dawned on me pretty soon that the actual work was very priority-based and done only for the wildest sample in vogue. That's weird, because behind the scenes I was reversing everything from Chinese games with copy protection to virus variants, and never felt bored while doing any of that.

The funny thing is that much of the signature work can be very well automated. I did much of my own signature work using a slightly different approach, so that it looked like I was working, and at breakneck speed at that. Primarily, the streamlining of the toolkit really did help me. I handled all the hex byte conversion and range checking, as well as the processing events to dump the final formatted sigs to SDK-specific requirements, in C#. So I could do the rote jobs of sig creation along with high-speed region extraction and analysis using the features provided by IDA, with backend processing done in C# and custom analysis done at the same time, using a few of my own techniques. Thus the throughput was around 60-70 signatures in 2 hours, which I think is a significant improvement over previous constraints; 2 per day per analyst was the last average done manually by our team.

My methods are sometimes contrary to the conventions in reversing: most of you might be writing the next IDA plugin in C++ to do the bulk of the work, with IDC scripts for the more mundane errands. I, on the other hand, took a little time to make an OCR application that does a good part of my analysis preprocessing by taking screenshots of IDA. I know it's funny. But think of how the movie and game industries have used graphics and FX; it's not such hard science and maths, but it's very appealing and effective all the same. Why not leverage that in reversing as well? So I learnt about clustering algorithms, image analysis techniques, boundary extraction algorithms, distance mapping and the like. It's a lot more fun when self-made constraints go up in smoke. With high-performance shader algorithms and 3D vector quaternions in vogue, it's high time we hang up vi and move on. The benefits are surely rewarding.
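As an illustration of the rote plumbing that got automated, a simplified C# sketch (the actual SDK format is proprietary, so the record layout, names and offsets below are made up): take a byte range extracted from the sample, render it as a hex pattern with wildcarded positions, and emit one formatted signature record.

using System;
using System.Linq;
using System.Text;

static class SigFormatter
{
    // Render a byte range as a hex pattern; 'wildcards' marks offsets that
    // vary between samples (relocations, immediates) and become "??".
    public static string ToPattern(byte[] bytes, int offset, int length, int[] wildcards)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < length; i++)
        {
            bool wild = wildcards != null && wildcards.Contains(i);
            sb.Append(wild ? "??" : bytes[offset + i].ToString("X2"));
            if (i != length - 1) sb.Append(' ');
        }
        return sb.ToString();
    }

    // One formatted signature record: name, file offset, pattern.
    public static string ToRecord(string name, int fileOffset, string pattern)
    {
        return string.Format("{0};0x{1:X8};{2}", name, fileOffset, pattern);
    }
}

// Example usage (file name, offsets and detection name are hypothetical):
// byte[] sample = System.IO.File.ReadAllBytes("sample.bin");
// string pat = SigFormatter.ToPattern(sample, 0x4F0, 16, new[] { 3, 4, 5, 6 });
// Console.WriteLine(SigFormatter.ToRecord("Win32.SomeFamily.Gen", 0x4F0, pat));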

BUILDING A TOOL FOR ANDROID ANALYSIS

More recently, over the past 3 months, I have got into the whole Android thing and have found it very interesting indeed from a lab point of view. Much of Android malware has to do with Java coding and use of the Android SDK. The primary tools I use are dex2jar and the JD Java decompiler. Executing a sample is not really needed every time, especially to figure out the set of payloads it might trigger; I find that simple decompilation gives the analyst a good view of the Java source code, and that is where the data mining can start. Android lends itself to so many properties per Android package file (.apk) that much of a sample's dynamic behaviour can indeed be described by them. The AndroidManifest.xml file contains much of the important config data. It is here that you get the essential set of settings required during installation, which is the Android security model: an install-or-don't-install choice for the user. The identification-related tags are the package name and the Android SDK version and level info. Then you have the list of activities, intents, intent filters, features, permissions and services, among a few others. Permissions are of particular interest, followed by intents and services. Intents are a messaging mechanism that APK files use for various custom events, or to respond (via intent filters) to other applications that send out particular messages, or when a sample implements a broadcast receiver for system-specific events like battery status. The SDK details the stages in the lifecycle of an Android executable, like onCreate(), onDestroy(), onResume() and so on, so it's logical to start investigating from one of these starting points. Further on, it's fine to manually analyse the source code, but can this be automated for the most part? To illustrate the goal, taking hints from x86/x64 malware, much of the payload specifics have to do with API patterns. For keyboard loggers, for example, the typical combo is GetKeyState()/GetAsyncKeyState()/SetWindowsHookEx().

For process listing it's typically CreateToolhelp32Snapshot()/Process32First()/Process32Next().

Most memory-based injection patterns use VirtualAlloc(), VirtualProtect(), VirtualQuery(), WriteProcessMemory(), GetThreadContext(), SetThreadContext() and CreateRemoteThread().

Why not try to profile an Android app the same way? The above heuristics are really just established patterns of OS-provided APIs being used creatively. The Android SDK has classes and function interfaces for all the provided features of an Android phone or tablet. As most of the malware carries an SMS-based payload, the class used is android.telephony.SmsManager, via sendTextMessage()

OR

sendText()

There are usually premium numbers and SMS text strings either embedded in the source, read off an XML file, or decrypted from a custom database inside the APK package. Thereafter the sample uses one of the above APIs and suppresses any notification of incoming messages by calling the abortBroadcast() method. It then follows one of the many variations on the business model of getting money out of the unassuming victim.

From kernel-mode and user-mode rootkits we see one common characteristic, stealth, so that the user is never aware of the infection. Similarly, we can conclude that if an Android application uses a specific API, or a set of APIs, that subverts notifications the user should be seeing, it is automatically suspicious as trying to be covert while sending unsolicited SMSes. Heuristics like this are essentially foolproof as long as the mechanisms stay compliant with the analysis. An important part of such rule development is the effective use of standard statistical techniques and proper data mining models. On building such a model over larger sample sets, I have come to recognise similar patterns that describe much of the behaviour-related payloads likely to be triggered by the malware. After setting up a probability-based model and fine-tuning the input parameters, I get a very resilient set of rules that extract the profiles from static analysis alone; fast and very relevant. I would like to draw a comparison with law enforcement in the West, where institutions like the FBI have behavioural profiling units that try to give an accurate profile of the offender: not guesswork, but good use of statistics and trend analysis. Immediately I found the analysis sessions no longer needing to decompile everything to get the essential information out, except for the more complex types that require extensive decryption logic to extract files or figure out keywords. Further, on doing a batch-level analysis, the trends are very visible over the whole sample set. This automatically gives me a profile of the percentage of the sample set that exhibits a specific behaviour type, along with their API sets and their implicit permissions.
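As an illustration, a made-up AndroidManifest.xml fragment (package, label and class names are hypothetical) showing the kind of combination such rules flag: permission to send SMS plus a maximum-priority receiver for incoming SMS, which is what lets abortBroadcast() silently swallow replies from premium numbers.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.suspicious">

    <!-- can send SMS and see incoming SMS before the default app does -->
    <uses-permission android:name="android.permission.SEND_SMS" />
    <uses-permission android:name="android.permission.RECEIVE_SMS" />

    <application android:label="FreeGame">
        <!-- high-priority receiver: intercepts SMS_RECEIVED and can call abortBroadcast() -->
        <receiver android:name=".SmsInterceptor">
            <intent-filter android:priority="2147483647">
                <action android:name="android.provider.Telephony.SMS_RECEIVED" />
            </intent-filter>
        </receiver>
    </application>
</manifest>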

At this point any sort of signature can be made, not only for a specific sample but a more generic one that catches a larger portion of the data set. I am happy for now.

The Findroid paper describes additional analysis parameters like file structure analysis and strings-based analysis, as well as the converse of privilege escalation, over-privilege requisition, wherein the APK file of a malware sample typically likes to configure more permissions than it requires and hopes to get installed by a gullible user. It makes sense, and very good statistical work has been done by the Malware Genome Project researchers, which I have also validated in my lab: it gives me a rank-based model of the kinds of permissions, and their frequencies, most likely to be requested by the malware data set versus the benign data set. I like these parameters as well and look forward to including all of them, and more, in my own analysis toolkit. Much of my excursions of the past months have filtered down into my tool SCIENT – Android Malware Gatecheck, which automates most of what I have described while having other goals as well, including making it interactive for analysis sessions and providing a dashboard-like interface for fast data retrieval.
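A toy C# version of that permission ranking, assuming the permission strings have already been pulled out of the decoded manifests into one set per sample (the class and method names are mine, not SCIENT's): compute how often each permission appears in the malicious set versus the benign set and sort by the difference.

using System;
using System.Collections.Generic;
using System.Linq;

static class PermissionRanker
{
    // Fraction of samples in 'samples' that request 'permission'.
    static double Frequency(IList<HashSet<string>> samples, string permission)
    {
        return samples.Count(s => s.Contains(permission)) / (double)samples.Count;
    }

    // Rank permissions by how much more often the malicious set requests them
    // than the benign set does (a crude over-privilege indicator).
    public static IEnumerable<Tuple<string, double, double>> Rank(
        IList<HashSet<string>> malicious, IList<HashSet<string>> benign)
    {
        var all = malicious.Concat(benign).SelectMany(s => s).Distinct();
        return all
            .Select(p => Tuple.Create(p, Frequency(malicious, p), Frequency(benign, p)))
            .OrderByDescending(t => t.Item2 - t.Item3);
    }
}

// Example usage (the permission sets would come from the decoded AndroidManifest.xml files):
// foreach (var t in PermissionRanker.Rank(malwareSets, benignSets).Take(10))
//     Console.WriteLine("{0}  malware={1:P0}  benign={2:P0}", t.Item1, t.Item2, t.Item3);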

Finally, the main complications you will deal with in Android malware are embedded malware, exploits, reflection and JNI.

CONCLUSION: The benefits of streamlining your toolkit cannot be overemphasized. It's paramount that, as contemporary researchers, we embrace related disciplines in the reversing field and combine them with other fields: gaming tech, graphics programming, criminal psychology for behavioural profiling, and the like. There is so much to do that your tools should not be a hindrance in your next analysis session. RTFM, or make your own tool along with an RTFM instruction. Happy reversing!
