I will discuss a few topics that motivate me to make software the way I like it. I am currently writing an Android analysis tool in C#, so I have a few things to share as notes from my daily excursions in tool design and from coding my own toolkit to help in my reversing tasks. The agenda:

1. UI and ergonomic design

2. Leveraging C# for your daily reverse engineering

Firstly, I would like to describe the motivation behind my tool: it uses data mining essentials and workflow ergonomics to strive for a better analysis and reporting environment for Android analysts. It currently focuses on automated analysis, with more modules for both user interactivity and analysis to come. The main datasets come from the AndroidManifest.xml file and the Android SDK APIs used in the dex file. Once the essential information gathering is done, the datasets are ranked, weighted and sorted, then mapped against existing research datasets to predict the most likely used APIs in the apk file. This is beneficial because it tends to give a fairly confident profile of the potential behaviour of the apk sample in question. Thus we get a per-sample view of the extracted analysis parameters, and finally batch-wise or whole-dataset permission and API frequencies. This histogram gives us the trends in a particular sample set; the larger the sample set, the better, for obvious statistical reasons. Internally, a score is also computed for each apk file, which feeds the final suspiciousness index; crossing a threshold marks the sample as likely malicious. Much of the reporting framework is deliberately minimal, functioning more like an interactive dashboard.

So this article describes the concepts and methodologies I have used for the tool design and the coding approach I have taken using C#. It is more an exposé of the approach than of the implementation; for that, the source code is available. The current code base has around 5000 LOC. Without further ado, let's proceed.

UI and ergonomic design

When you deconstruct the acronym GUI, you get the essential parameters needed to represent a metaphor within the constraints of the computer's visual layers. The layers incorporate both hardware features and software translation mechanisms. You have to instruct the system how to communicate between the layers and make it both feasible for computation and useful for human-computer interaction. Current abstractions in graphics and interactive interfaces enable you to think and code in more natural terms. Among the countless visual libraries, I am focusing on the Windows platform's GDI+ and DirectX APIs to illustrate a few concepts that may just be useful in your next software tool.

It's conventional wisdom of sorts in the development community that programmers make bad designers, which might just be statistically true for various reasons, though I can pretty much pin it on the bane of all designing paradigms: the command line. This whole way of working hails from an era when the transition from punch cards to blinking lights was a big step in computing, and little further than that. Much great work has been done using this interface, but then I guess the masterpieces in the Louvre don't have a command line. The visual arts have been with us from the dawn of mankind, and as much as some programmers hate them, they're here to stay. In fact we can take so much from the existing legacy of visual splendour and imbibe its design principles in our own software (using Photoshop to draw a moustache on the Mona Lisa not included).

Text is not the most natural way to work, either from an evolutionary standpoint or given how much information visual symbols communicate compared with something less intuitive like sound. In fact the stereotype that many programmer geek types are autistic or dyslexic might, if anything, explain the cryptic text in most terminals' history logs, and it pretty much defeats the case for text. Text and language came long after the eyes, and the imaging circuits in our brain had been training a priori for literally thousands of years. This immediate evolutionary advantage is there to be availed of. So how can we do this?

Let's study some of the more pertinent parameters of how we perceive visual objects and infer meaning from them. Let us start with the LINE. How many properties can you think of that you might use?

– width (thickness)

– length

– pressure (ink density, etching depth)

– stroke type (dotted, dashed, smooth)

These are some of the important ones.

Next up, the CIRCLE (a special form of an ellipse).

– Radius

– Circumference

– Bounding rectangle

And so on and so forth. Try to list the properties of other basic shapes: the rectangle, square, triangle and so on. These are the primitives that can describe pretty much any shape, the triangle being of particular use, especially in 3D graphics. It was known as far back as ancient geometry that a sphere can be represented, or approximated, using triangles as a primitive shape.

Moving on to curves: simple ones like the arc, which is part of a circle, and Bézier curves and splines are the more common ones in graphics and design.

Another important fundamental concept is the coordinate system, and there are quite a few of them, most inter-translatable using maths. A coordinate system is essentially a way to represent variables, and much of it is done visually even though the computations are expressed in matrix maths. The essential ones are the Cartesian coordinate system and the polar coordinate system. In the first, two or three mutually orthogonal axes are used, to which variables are assigned and their values mapped. You already know them as the x-axis and y-axis in 2D graphics; add a third z-axis and you have a 3D coordinate system. Remember, much of this is entirely conceptual; it's just our way of making sense of what we sense around us. Time as the 4th dimension, also a universal invariant, can be included too, if animation does not seem too esoteric. In theoretical physics, dimensions have to be invented just to keep various equations correct, so we don't know it all yet. Even then, this compromise is useful enough to warrant full use in so many ways. The polar coordinate system plots angular motion and variables over a 360-degree range; it's a way to work with circular motion. In fact much of game programming uses mainly these two systems. Others are contextually useful too, like the cylindrical coordinate system, which combines a polar and a Cartesian coordinate system.
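As a quick sketch of how two of these systems inter-translate, here is the standard polar-to-Cartesian conversion and its inverse (the class and method names here are my own, not from the tool):

```csharp
using System;

static class Coords
{
    // Polar (radius r, angle theta in radians) to Cartesian (x, y).
    public static (double X, double Y) PolarToCartesian(double r, double theta)
        => (r * Math.Cos(theta), r * Math.Sin(theta));

    // Cartesian (x, y) back to polar; Atan2 keeps the correct quadrant.
    public static (double R, double Theta) CartesianToPolar(double x, double y)
        => (Math.Sqrt(x * x + y * y), Math.Atan2(y, x));
}
```

This is the same maths game loops use to turn angular motion into screen positions.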

With the brief theory above, let's move on to how it all translates to our screen. The computer display matrix is assigned a 2D system: the width of your screen is the x-axis and the height is the y-axis. The y values are plotted from top to bottom, so the top-left corner has coordinates (0, 0). For our programming purposes, the various visual components each have their own client coordinates with the same kind of origin, translated to different starting points according to their locations.
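The flip between a conventional y-up maths space and the screen's y-down system can be sketched as a one-liner (the helper name is mine):

```csharp
static class ScreenCoords
{
    // Map a point from a y-up space of the given height to screen space,
    // where (0,0) is the top-left corner and y grows downward.
    public static (int X, int Y) ToScreen(int x, int y, int screenHeight)
        => (x, screenHeight - 1 - y);
}
```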


UI design principles incorporate the very basics and a lot more. Design is an umbrella term that takes the various arts and sciences under its hood. Visual design, sound design and architecture have so much in common that the only differences are the audience and the sense titillated. So much of Western Baroque music was inspired by the architecture of its time, from churches to royal mansions; the ornate shapes transfer directly to the complex contrapuntal lines that dictate much of that era's sound. Even today, the trends of mimicking or translating differing inspiration sources and finding a common theme are alive and kicking. Couple that with naturally occurring mathematics, the Golden Ratio and the Fibonacci series among others, and we begin to see a much intertwined existence between the disciplines. It almost seems like Dieu, or God, has given us a variety of options to service whichever senses we are most affiliated with.

UI design being more a science of compromises, it's a larger proportion art than science, though arguably science is art that believes it's a separate identity.

Let's take into account how a multitude of shapes can convey information. We have the attributes –

– Proximity

– Depth

– Colour

– Shape

– Size

– Orientation

– Clustering

– Quantity

– Trends over a timeline

Proximity can convey the nearness or farness of a particular class of objects. This can be used to convey the value of information within a range.

Depth can be used to convey any other attribute attached to that particular object class. For example, if a complex number is used to calibrate a particular value, the imaginary component can be represented on the z-axis.

Colour can convey the presence or absence of a particular piece of information.

Shape can convey the type of information.

Size can convey the saturation of that particular information or the threshold.

Orientation can be relative to the coordinate system or tied to a specific attribute from a reference standard. This could give the tendency score of a particular class of objects.

Clustering can convey the familiarity between the set of objects collected.

The total quantity of a class of objects is an immediate parameter of use.

Using the 4th dimension gives us a timeline of events as shapes animate in and out, while sizes and colours evolve with the information. This gives us a good data mining set for further processing.
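As a toy example of the "colour conveys state" idea in the context of this tool, one might map a 0-1 suspiciousness score onto a green-to-red RGB triple. This is purely illustrative, not the scoring code of the actual tool:

```csharp
using System;

static class Hud
{
    // Map a 0..1 score to an (R, G, B) triple: green for benign, red for suspicious.
    public static (int R, int G, int B) ScoreToColour(double score)
    {
        score = Math.Max(0.0, Math.Min(1.0, score));  // clamp out-of-range input
        return ((int)(255 * score), (int)(255 * (1 - score)), 0);
    }
}
```

Size, shape and the other attributes above can be driven from the same score in exactly the same way.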

So you should already get the idea of how good use of a single HUD (Heads-Up Display) can convey a lot of information with something we learnt in drawing class.

When you design your UI on Windows, the first platform for experimentation is the ubiquitous Windows Forms. How do you place controls in the layout grid? How much information should be presented to the user? How many contextual views should be used? What colour combinations and fonts should be used? If you design your own custom controls, do they jar with existing paradigms that users are familiar with? Are deeply nested menus the solution to all problems in life? These are some of the important questions you should ask while designing your next app.

If you read legacy books on the first Windows APIs for visual design, it's pretty interesting to see how things have become more convenient yet have adapted to our more masochistic perversions, and thus in reality never actually changed.

Nowadays there is no need to elaborately fill out data structures and pass reams of header data just to display your form. It gives a good sense of nostalgia, but thank God, or rather Microsoft, for encapsulating much of that boring, by-the-numbers typist-tapestry. It's not just our already crazy heads that are saved; it's the carpal tunnel that's taken care of (or is it?). Nowadays it's just drag and drop. And what do you drag and drop? The same old menubar, toolbar and the fantastically useless statusbar onto the same old squarish box called the Form.

So think about control placement for the kind of interactivity and visual use you intend. Notice that you are either right-handed or left-handed, and most are right-handed (if you are truly ambidextrous this particular point is irrelevant, unless you have a favourite hand, in which case it isn't). Look at ATMs and piano keyboards: what's in common? They cater to the majority by placing the controls so as to facilitate right-handed people going about their tasks. Left-handed pianists get no special keyboard; just like in cricket, they use the tool as-is and adapt to the scenario. So what happens if you place the essential tools at the top rather than in a position geared for ergonomic efficiency? You get an average that is not ergonomic but expected, so it eliminates the surprise factor. I guess you get the compromise part of this whole endeavour.

My proposed solution is to present the important options based on handedness, so what you get is a vertical toolbar that enumerates options from top to bottom in order of use or importance. This toolbar can be shifted to either side per the user's handedness. Tracking the use frequency of a particular item can further ease regular use. Ideally the toolbar should be placed near the essential controls, so that minimum movement is required to get the data out in the minimum number of pixels travelled using a mouse (hey, I just got a new measurement!). Minimal use of the keyboard for navigation purposes is recommended.

Next, the number of tool items should ideally be an odd number, as it's been suggested that humans tend to remember odd numbers with greater accuracy.

Next, the mutual spacing between the different items can have a very beneficial effect when done well. Regular spacing is recommended to give a sense of balance, but too much balance reduces tension, and that reduces the usability factor of the design.

Taking parallels from music, Western music specifically incorporates a measurement concept that is multipurpose and sonically relevant in day-to-day use for any literate musician. It's the basis of chord construction, counterpoint, choir arrangements and more: the all-purpose INTERVAL. Much of music theory is an intellectual dance around conventions whose musical context varies from culture to culture, but this one is resilient. An interval is essentially the distance between two pitches, or notes in musical jargon. Conventional musical wisdom accumulated over the centuries has assigned every such distance a pleasing factor that is surprisingly binary in nature: consonant intervals and dissonant intervals. Those who are musically literate will know the different intervals, like the perfect 5th or the minor 3rd. The sounds they describe are unique to the 12-tone tempered scale, which is itself a compromise on the sound structure that enables instruments to play in different keys without losing the musical effect. In a typical octave, the octave itself is the only whole-integer ratio; the rest are weird fractions. It's just the way our number system is aligned, or misaligned, with the natural ratios permeating our universe.

In the larger scheme of things, when constructing a musical passage, the tensions are just as important as the resolutions. That is the guiding mechanism: the listener expects a unifying theme that also surprises and soothes him, takes him on a journey. Visually, think of this as using a simple square as one figure versus a square with a rhombus intersecting it. That variety stirs things up to give an ebb and flow. If it does not flow, it's a potential magnet for mosquitoes. You get the idea.

Keeping the important tool items closer together and spacing other items further apart immediately puts the above theory to use.
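That layout rule can be sketched as a pure calculation: top-to-bottom y positions for a vertical toolbar where the important group at the top is packed tighter. All the pixel values here are assumptions for illustration:

```csharp
static class ToolbarLayout
{
    // Returns the y position of each item: tight gaps inside the important
    // group at the top, a wider gap from there on.
    public static int[] LayoutY(int importantCount, int totalCount,
                                int itemHeight = 32, int tightGap = 4, int wideGap = 16)
    {
        var ys = new int[totalCount];
        int y = 0;
        for (int i = 0; i < totalCount; i++)
        {
            ys[i] = y;
            y += itemHeight + (i < importantCount - 1 ? tightGap : wideGap);
        }
        return ys;
    }
}
```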

Next, effective use of fonts and font properties is also very useful. Not all fonts are equal; or to be more specific, all fonts are equal, though some are more equal than others. Using Comic Sans in a tech presentation does not make sense, but neither does using Times Roman at a small size. For a regular app, Perpetua, Helvetica, Courier and the sans-serif fonts are good enough. If accessibility is not the priority just yet, set the font size to something all users can read, not only the ones with an Apple display; typically I set it in the 10-13 range. Typography is a rich subject that rewards research on your own motivation.

Colour theory is pretty much all about choosing colours, within the medium's constraints, that give the best effect. The Kuler wheel at the Adobe website gives a good demonstration of how colour combos work. The thing I learnt is to stick with grave colours and use complementary colour schemes that are easy on the eye. This has to do with how we have been seeing things since day one. Soil has a dark brown hue. The skies have a cool blue colour on a good day. Heavy nimbus clouds have a grey offering that inspires poets. Trees have a soothing green shade. Much of nature's hues are called 'earthy colours', duh. Black is not really a colour, more like the absence of it; in fact it's really difficult to get black, so it's more like using concentrations of brown to approximate it. A colour compromise. We are not used to seeing fluorescent green every day for long stretches, so we'd better steer clear of it. Also, our retinas are sensitive to long wavelengths like red, so warning devices can incorporate red to signal a specific state; use red or nearby ranges like orange to intimate to the user that something is happening. What you see on screen is not always what you get in print, so if colour is an important issue, make sure it's checked for range validity. Most graphic apps do this by denoting safe colours. The difference is mainly due to subtractive mixing of colour pigment molecules versus additive mixing of light. Read the physics on your own.

These things are taken for granted, but when used properly they give the intended increase in the usability factor.

Getting to interactivity, single clicks and double clicks should be used to the max, without going beyond the one-click rule. It's far better to toggle using any kind of input: mouse, voice recognition, tablet and the like. A HUD view that's context-relevant and fast is really helpful; otherwise context-specific views are welcome as well. Data should be transparent among the different views; this gives a synchronised effect.

Overview and zoom are also important for large datasets; together they make an essential navigation device.

Scrolling can be avoided if the data is well formatted, or scrolling, especially horizontal scrolling, can be made more intuitive through better interaction with the display itself. Editing modes can be used to switch between navigation and data manipulation. These abstractions immediately help users stay aware of their current status in the application. Further, the keyboard modifiers CTRL and SHIFT are very helpful for accelerating and decelerating scrolling parameters, like the number of lines per scroll. ALT and SPACE are close together, so I avoid ALT, but SPACE can be very useful for the main activations.
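The modifier idea reduces to a small pure function; the acceleration factors here are assumptions of mine, not values from the tool:

```csharp
using System;

static class Scrolling
{
    // CTRL accelerates, SHIFT decelerates the lines scrolled per wheel notch.
    public static int LinesPerScroll(int baseLines, bool ctrl, bool shift)
    {
        int lines = baseLines;
        if (ctrl) lines *= 4;                      // accelerate
        if (shift) lines = Math.Max(1, lines / 2); // decelerate, never below one line
        return lines;
    }
}
```

The result would then be fed to whatever scrolls the view in response to the MouseWheel event.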

Good use of selection mechanisms also lets the user feel in control of the application, which increases its use.

Finally the idea of MODES is more interactive than deeply nested menus.

Ever had that moment when you did not want to click the close button, but somehow the mouse pointer got there at the top right (top left on Mac) and closed the form for you? I never feel safe with the close button looming like a sword over my head, so I propose using a form region and a double click for exiting. It's intuitive: you start an application by double-clicking it (mostly), so you might close it in a similar fashion. It takes the thinking, searching and pointing out of the picture.

Nowadays I am studying a very visible though niche domain called FUI, or Fantastical User Interface, design. Watch Hollywood movies and look at every computer screen while the protagonists are doing something on it: you see weird, out-of-this-world, amazingly fast, amazingly good-looking panels that display text and visuals in a great way. Also check out Mark Coleran's work in this area; it's just amazing how much hard work goes into such UI design. Getting it to look good and be useful enough is the challenge. Well, for much of Windows-related stuff, getting it to look good is THE challenge; usability factors come second. The essentials I took from such work are that an element of inspiration or creativity should lie in every work, but the fundamentals of design still apply. Use of large text to communicate information, flashing and timers, animations that simulate some sort of movement, and populating the screen to give an illusion of complexity are among the other gems from this goldmine. Add creative inspiration from real-life products and other software tools, and of course a spice of fantasy or the unreal, and you have the melange that creates this beautiful world of high-tech machinery. And not a single line of code is written; it's all done in Photoshop and animated in After Effects. Well, we can code, so why don't we do something about it?

Continue to experiment with better use of visuals to communicate ideas and make your application comfortable to work with. Nothing is set in stone, and, more than we would like to admit, a lot of mistakes have been made, so why not learn from them?

Let's get on to understanding the GDI+ library to leverage C# and build any kind of visual unit you want. The moment you use GDI+, the confines of a form break down, at least visually. You can draw any shape you want, in any colour you want, use any picture you made or like, and engineer your app into something from your dreams.

This is not a C# primer, so I'll get down to some of the more frequent functions I use from the API.

The following namespaces have to be included:

using System.Drawing;

using System.Drawing.Drawing2D;

To draw something on the screen, you do one of two things: override the OnPaint() method and write your graphics code there, OR write all the graphics code in a Paint event handler. The OnPaint() method raises the Paint event. The paint loop is effectively an infinite event-based loop that handles graphics refreshing for the corresponding window; the OS drives window refreshing using callbacks.

Wire up the event handler after the InitializeComponent() call in the form constructor:

this.Paint += new PaintEventHandler(Form1_Paint);

The first thing I normally do is enable double buffering for the specific form or user control from the Properties view. Caveat: a few books, even good ones, tend to use the Panel control for their examples. It is not really the best control for this, because double buffering is missing from its properties; a workaround is needed, starting with inheriting from the parent control, getting a handle to the device context, and so on. So when the flickering starts, which is inevitable with this limitation, don't say I did not warn you, especially after 1000 lines have already been poured into it.

It’s far simpler just to use a UserControl.cs class from the new items menu. This class implicitly supports double buffering as one of its properties.

What double buffering prevents is flicker from the refreshing activity done by Windows. When a portion of the screen is redrawn, the process is actually quite linear, which involves some latency and refresh-rate-to-human-eye sync issues. A buffer that builds the bitmap image off-screen and then uses that image for the final picture makes the transition flicker-free.

The second thing I do is enable anti-aliasing to prevent the drawn shapes from getting aliased. Aliasing is due to the nature of digital graphics: every picture element represents only a sample of the image's colour value at that location; it's not really continuous. The lost detail shows up in curves and circles as jagged lines. If you build a circle out of square Lego blocks you will get a circle, but you will also see the rectangular edges sticking out; that's pretty much aliasing. What anti-aliasing does is recolour the surrounding pixels with gradually blurring and fading shades of the border colours. The final effect is the illusion of a less jagged line.
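The recolouring idea boils down to a coverage-weighted blend per colour channel, roughly like this (a deliberate simplification of what the real algorithms do):

```csharp
using System;

static class AntiAlias
{
    // Blend one colour channel of the shape with the background according to
    // how much of the pixel the shape actually covers (0..1).
    public static int Blend(int shapeChannel, int backgroundChannel, double coverage)
        => (int)Math.Round(shapeChannel * coverage + backgroundChannel * (1 - coverage));
}
```

A border pixel half-covered by a white line over black would thus end up a mid grey, which is exactly the fading edge you see.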

Just add this line in the paint loop.

e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;

This can be performance-intensive, as the algorithms required are quite complex. So if the graphics get involved, you may need to improve your rendering logic to win back performance.

After the above, assuming I want to keep the form rectangle, the drawing functions I use most are –

DrawArc(), DrawEllipse(), DrawRectangle(), DrawString(), DrawCurve(), DrawLine()/DrawLines(), DrawPath(), and their Fill counterparts (FillEllipse(), FillRectangle(), FillPie() and so on).

Think about it, these are pretty much all you require to come up with any sort of shape. The rest are just convenience based function overloads, taking a few other parameters for a very specific use. These are essentially the graphic primitives that can be used towards drawing your next masterpiece.

Now, you need some sort of abstraction to come up with a virtual pen, a brush, a colour palette and a text font (think classical calligraphy) to emulate what we do as humans in real life and translate it into code. Indeed, these have been abstracted in the form of APIs that take in a set of parameters, or provide static constants for a regular set of values, enabling us to think in real-world terms. For your own drawing class prerequisites as a child, you must have asked your parents for a long list of drawing stationery: a brush set with different thicknesses, one for light strokes and another for broad strokes, a set of colour pencils, different grades of pencil tones, a colour box containing the most used colours, or an oil paint set or acrylic colour mix, stencils for text tracing and so on. The Pen, Brush and Font classes do the same.

e.Graphics.DrawArc(new Pen(new SolidBrush(Color.White)), new Rectangle(new Point(0, 150), new Size(200, 200)), 0, 180);


Take a look at the line above, which draws an arc (any specific size and angle values completing the call are illustrative). new Pen() instantiates the Pen class, and this constructor overload takes a Brush.

new SolidBrush() is the parameter passed, which instantiates a SolidBrush object. Color.White is passed into its constructor; Color is a struct, and White is the colour chosen from it. There are other kinds of brushes as well, like the LinearGradientBrush and HatchBrush classes, which simulate colour gradients and the hatch-style patterns often used in engineering and architectural manual drawings, with the technique lifted into code.

Font class instances provide text based configuration for use in graphics.

Font f = new Font("Arial", 17);

A new instance is created, with the font family passed as a string along with the font size to the constructor.

A good method for determining a string's size, for calibration during string placement and zooming, is:

SizeF size = e.Graphics.MeasureString("Notes", f);

The MeasureString() function takes the string and the font instance assigned to it.

This gives us a SizeF structure: a pair of floating-point numbers denoting the width and height of the text string on screen.

The Point struct is no doubt very useful, both for graphics calibration at runtime and for reading the mouse pointer's position once the requisite mouse events are processed. Building shapes at runtime also requires collecting graphics data, and Point is one of the essentials: to draw a custom shape, a set of anchor points can be provided and the connecting lines are taken care of by the function. The Point structure holds two integers, or floats in its PointF variant, for the X and Y axes.

So at the end of the day, you have a set of APIs that let you use a few lines of code, automate the process, put some logic into it, and have everything drawn in the best way the display technology can render, while staying abstract enough to be used as simply as possible.

Further, a set of graphics mechanisms called TRANSFORMS is very useful as well. If the origin of a graphic needs to be changed, use TranslateTransform(). To blow up or zoom out of a graphic, use ScaleTransform(). Any kind of rotation uses the RotateTransform() method.

TranslateTransform takes the offset from the current origin as the new origin, a new (x, y) pair.

ScaleTransform takes factors of multiplication for the x and y coordinates.

RotateTransform takes the angle to rotate by.
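Written out as plain maths, this is what the three transforms do to a point (angles in degrees, as GDI+ expects; the helper names are mine, not the API's):

```csharp
using System;

static class Transforms
{
    // What TranslateTransform does: shift a point by (dx, dy).
    public static (double X, double Y) Translate(double x, double y, double dx, double dy)
        => (x + dx, y + dy);

    // What ScaleTransform does: multiply each axis by its factor.
    public static (double X, double Y) Scale(double x, double y, double sx, double sy)
        => (x * sx, y * sy);

    // What RotateTransform does: rotate about the origin by `degrees`.
    public static (double X, double Y) Rotate(double x, double y, double degrees)
    {
        double a = degrees * Math.PI / 180.0;
        return (x * Math.Cos(a) - y * Math.Sin(a),
                x * Math.Sin(a) + y * Math.Cos(a));
    }
}
```

GDI+ composes these into a single matrix internally, which is why the order you apply them in matters.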

The Timer class is essential for activities like changing screen modes, activating a specific mode, or any sort of animation. The Interval property in milliseconds and the Tick event are all there is to setting up a timer; then it's just the Start() and Stop() methods.

Finally, the Invalidate() method is very useful for refreshing the screen after a specific event has been handled, or whenever you want the screen to show new data. It also takes a bool parameter (true/false) specifying whether the child controls should be invalidated as well.

These essential methods and data structures are all you need for most of your graphics logic coding. The rest is how you handle the various events, good use of flags to switch between modes and a good understanding of essential geometry to figure out the maths to draw the constructs on screen, or animate them in a specific manner.

The funny thing is that much of Windows programming is all about using APIs. Look at seven-year-old assembly code that uses a Windows graphics library like GDI, compare it with C code that uses GDI/GDI+, and finally with C# code that uses GDI+: the differences are minimal. The concepts are exactly the same, even the API methods used. The languages are generations apart, and that is pretty much the only differing factor, beyond the obvious ones. C#, being the latest, has successfully built wrappers around essentially unsafe code for the .NET paradigm without sacrificing any usability. Managed DirectX wrappers have come and gone even after much porting and wrapping, but GDI+ is here to stay.

I highly recommend learning a graphics application like Photoshop, so that much of the prototyping and the background graphics for the controls can be done there, with the graphics code and interactivity used as very good extensions to it.

Then it's the keyboard and mouse handling that has to be dealt with to maximise interactivity. I normally disable any factory-made form styling that comes with Windows: I build the prototype visuals in Photoshop, use a simple form, and set the image as its background, repeating ad infinitum for the rest of the controls. So then how do you handle moving the client area of your application within the confines of the screen? The solution is to use three mouse events: MouseDown, MouseMove and MouseUp. These events fire when the application message queue receives these particular translations of your clicking activity through the Windows message pump. To simulate a click-drag operation that relocates your application to another coordinate, do the following.

Set up these event handlers. There are two ways to do it: use the Properties window's events view, or write the wiring code after the InitializeComponent() call in the form constructor.

this.MouseHover += new EventHandler(Form1_MouseHover);

this.MouseMove += new MouseEventHandler(Form1_MouseMove);

this.MouseWheel += new MouseEventHandler(Form1_MouseWheel);

this.MouseDown += new MouseEventHandler(Form1_MouseDown);

this.DoubleClick += new EventHandler(Form1_DoubleClick);

this.Paint += new PaintEventHandler(Form1_Paint);

The idea is to capture the current point coordinates at the click, save them, and calculate the offset to the new location the mouse points to as the user drags. Finally, add the offset differences to the original location to set the new one. It's actually very simple, and it makes the Windows max/min/close trinity appear (or disappear) trivial to implement. I normally don't use max and min, as I prefer to make the environment streamlined enough to nearly eliminate excessive tool juggling. This immediately brings leaps and bounds in productivity. After working in AV firms, I noticed how my colleagues used so many tools to get a single result, and I could see the severe ergonomic issues they were facing just to get a hash value. As I said, it's masochistic. This is pan-domain; from my excursions in music studios, many of my friends who use music equipment to produce music have the same problem. It's always best to get the most out of one tool, or maybe two, than to fill a house full of stuff that gets less than 1% of proper use to get 10% of the work done in triple the time. Results do the talking, and workflow ergonomics is my favourite coffee-table conversation starter (not with the fairer sex).

Point p; bool clicked = false;

void Form1_MouseDown(object sender, MouseEventArgs e) {
    if (e.Button == MouseButtons.Left) {
        p = new Point(e.X, e.Y);   // save the point where the drag started
        clicked = true;
    }
}

void Form1_MouseMove(object sender, MouseEventArgs e) {
    if (e.Button == MouseButtons.Left && clicked) {
        this.Left += e.X - p.X;    // horizontal offset from the saved point
        this.Top += e.Y - p.Y;     // vertical offset from the saved point
    }
}

void Form1_MouseUp(object sender, MouseEventArgs e) {
    clicked = false;
}


The above lines do the job. A Point struct instance acts as a repository for the initial click position, and a bool flag signals that a drag is in progress; together they propagate the offset values for the final calculation.

One more usability decision I make is automatically focussing the control/form that the mouse is currently hovering over; this eliminates repeated alt-tabbing to get to the destination window. If the toolkit is small and well integrated, the power of not alt-tabbing is immediately evident. So, to skip from window to window and get to work, just point to the relevant view and start banging away.

void Form1_MouseHover(object sender, EventArgs e) {
    this.Focus();   // give focus to whatever the pointer is hovering over
}

Keyboard events are handled in a similar manner using the events KeyUp, KeyDown and KeyPress.

A very good use of custom views I learnt from the music software Logic Audio by Emagic GmbH, now Apple Logic Pro. This software masterpiece (any electronic musician worth his salt knows its legacy) uses a navigation mechanism called screensets, wherein the user assigns a pre-arranged layout of the software’s elements to a specific view. All of this can be customised by the user – say, one view for recording, another for sampling and keyboard mapping, among other workflow views – and finally assigned to a number from 1-99 along with the CTRL keyboard modifier. This is a very efficient way to work with something as complex as music.


Leveraging C# for your daily reverse engineering

C# is my favourite language and I definitely intend to stick with it, as the community is amazing and more and more programming paradigms are being incorporated into .NET. From Eiffel to F#, IronPython to managed/unmanaged C++/CLI, you can’t go wrong with this one. From Windows to Xbox, the power is visible everywhere. I will discuss the classes I use with the greatest frequency when I make my own reversing tools, plus a few pointers here and there.

Most of the tools I make for my day-to-day work involve the following things from C# –

1. Multithreading (System.Threading.Thread/BackgroundWorker component)

2. UI and resource hog algorithms decoupled i.e. responsive applications

3. Extensive use of events and delegates for communication between the various forms and controls.

4. File system classes (File, Path, DirectoryInfo, FileInfo, FileSystemInfo) that encapsulate the Directory and File objects.

5. FileStream/MemoryStream classes to work with dynamically read or generated data.

6. BinaryReader and BinaryWriter classes

7. Extensive use of collections and generics.

8. Structs – readonly fields/Enums

9. Strings manipulation classes and methods.

10. Properties – get and set accessors.

11. Regex

12. Process class for starting applications and reading commandline output.

13. GDI+ Graphics/3D

This whole set can be invoked by importing a few namespaces.

using System;

using System.Collections.Generic;

using System.ComponentModel;

using System.Data;

using System.Drawing;

using System.Text;

using System.Windows.Forms;

using System.Drawing.Drawing2D;

using System.IO;

using System.Threading;

using System.Diagnostics;

using System.Text.RegularExpressions;

The above set sums up much of my namespace laundry list.

Let’s delve into the classes themselves.

Much of reversing involves format parsing of some sort, whether it’s a PE file, a Dex file, a PDF or a JPEG, and so on and so forth. For this I use a byte array. It’s very convenient to have a direct representation of the actual bytes as integer values from 0-255 for every byte of the corresponding format. Once that is done, parsing mainly involves walking through the array and extracting specific lengths at specific offsets; these values are taken from the headers of the respective formats. Of course, if editing the file is required, an array would be expensive for large files, so a List<byte> can be used instead. While using BinaryReader to fill an instantiated array is fine, the performance is far better with the File.ReadAllBytes() method. This takes the path of the target file in the filesystem – it could be any binary file – and returns its contents as a byte []. For reading in a series of files for later manipulation in memory, I use a List<byte []>, or nest further by adding one accumulator list into another, like List<List<byte []>>. This is a lot simpler when enumerating long lists for graphic displays, and it is effective both for searching for a particular value and for addressing a specific type within a type.
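As a minimal sketch of this byte-array approach – the 8-byte “header” and its field offset below are invented for illustration, not a real format:

```csharp
using System;
using System.IO;

class ByteParseDemo
{
    static void Main()
    {
        // Hypothetical 8-byte header: a 4-byte magic followed by a
        // little-endian uint field at offset 4 (illustrative only).
        string path = Path.Combine(Path.GetTempPath(), "sample.bin");
        File.WriteAllBytes(path, new byte[] { 0x64, 0x65, 0x78, 0x0A,
                                              0x39, 0x05, 0x00, 0x00 });

        byte[] data = File.ReadAllBytes(path);        // slurp the whole file
        uint field = BitConverter.ToUInt32(data, 4);  // extract 4 bytes at offset 4

        Console.WriteLine(field);  // prints 1337 on little-endian machines
    }
}
```

Note that BitConverter follows the machine’s byte order, so on the usual little-endian x86 hosts this reads the field as a format parser would expect.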

I use structs extensively for any sort of custom data structure template, with value retrieval and setting implemented. This saves a lot of time in later processing, and being a value type a struct is easily passed around within a List<struct type>, e.g. List<dexFileStruct>, for better manipulation. Sometimes references don’t work the way you want with structs, so resetting a struct field anywhere other than the constructor is a bad idea. The solution is to make the fields readonly, instantiate a new struct, work with the new set of fields and point lists or stacks at it if needed. Don’t try to reset an existing struct’s field member.
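A minimal sketch of that readonly-struct pattern – the field names here are assumptions for illustration, not my tool’s actual layout:

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for a dex record; fields are readonly, so the
// only way to "change" one is to build a fresh struct.
struct DexFileStruct
{
    public readonly string Name;
    public readonly uint FileSize;

    public DexFileStruct(string name, uint fileSize)
    {
        Name = name;
        FileSize = fileSize;
    }
}

class StructDemo
{
    static void Main()
    {
        var files = new List<DexFileStruct> {
            new DexFileStruct("classes.dex", 4096)
        };

        // Replace the element with a new struct instead of mutating a field.
        files[0] = new DexFileStruct(files[0].Name, 8192);

        Console.WriteLine(files[0].FileSize); // prints 8192
    }
}
```

Trying `files[0].FileSize = 8192;` would not even compile: the list indexer returns a copy, which is exactly the reference surprise the readonly discipline avoids.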

Properties make life a lot easier for both classes and structs. Exposing types this way also adds a measure of safety by controlling the entry and exit points of a particular field value.
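For instance, a small sketch of a validated property – the Score field and its 0-100 range are hypothetical, not from my tool:

```csharp
using System;

// The property is the single entry and exit point for the backing field,
// so every write can be validated in one place.
class Sample
{
    private int score;

    public int Score
    {
        get { return score; }
        set
        {
            if (value < 0 || value > 100)
                throw new ArgumentOutOfRangeException("value");
            score = value;
        }
    }
}

class PropertyDemo
{
    static void Main()
    {
        var s = new Sample();
        s.Score = 42;
        Console.WriteLine(s.Score); // prints 42
    }
}
```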

Moving on to filesystem access and enumeration, the best way to get a list of directories is to instantiate the DirectoryInfo class and get the FileInfo array for every directory within the root; this process can be repeated for every directory level. Using the FileSystemInfo class also works, but unless you need extensive drilling capabilities I suggest using recursion with the classes mentioned above. In my experience FileSystemInfo is also a bit buggy at times and crashes for no apparent reason.

DirectoryInfo d = new DirectoryInfo(targetPath);
DirectoryInfo[] di = d.GetDirectories();

foreach (DirectoryInfo i in di) {
    FileInfo[] f = i.GetFiles("*.dex");
    foreach (FileInfo j in f) {

        ProcessStartInfo ps = new ProcessStartInfo();
        ps.FileName = "cmd.exe";
        ps.CreateNoWindow = true;
        ps.UseShellExecute = false;
        ps.RedirectStandardOutput = true;   // required to read the output
        ps.Arguments = "/c " + Environment.CurrentDirectory + "\\dexdump\\dexdump.exe"
                     + " -d " + "\"" + j.FullName + "\"";

        using (Process pi = new Process()) {
            pi.StartInfo = ps;
            pi.Start();

            string temp = pi.StandardOutput.ReadToEnd();
            pi.WaitForExit();

            using (FileStream fs = new FileStream(Environment.CurrentDirectory
                    + "\\dexdumpOutput\\" + i.Name + ".txt", FileMode.Create)) {
                using (StreamWriter sw = new StreamWriter(fs)) {
                    sw.Write(temp);
                }
            }
        }
    }
}
The above snippet illustrates using the commandline through cmd.exe, collecting the output of another commandline tool into memory and then writing it to a file. The ProcessStartInfo and Process classes are used for this; note the various fields set in each class to get the intended output. The separation is a good design decision in code: when the parameters to a class are numerous and complex, they can be encapsulated in a separate class that is referenced by the class requiring them. Very robust.

Next, threading can be really simplified using the BackgroundWorker component. A C# component exposes functionality but does not provide a UI. A BackgroundWorker exposes three events – DoWork, ProgressChanged and RunWorkerCompleted – and a Boolean property, WorkerReportsProgress. I normally don’t use cancellation a lot and instead build the UI around it, taking things up in chunks and then processing them. To initiate the background process, you call RunWorkerAsync(<Object argument>) and pass any input parameter, which must be typecast inside the DoWork event handler code; this could typically be a file/folder path or a list of user data types sent for processing. The DoWork event handler is where you write your most resource-hogging code. If any updates are required during the operation, you can send a percentage-completed integer value along with an object instance containing the userState – this can be any data type, which has to be typecast later on in the ProgressChanged event handler. After the task finishes, the RunWorkerCompleted event is triggered, and its handler can run any post-completion code. You can use as many BackgroundWorker components as needed, giving maximum flexibility. Couple that with Timer class instances and you get a very good threading model.
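The whole cycle can be sketched in a console host – in a WinForms app the ProgressChanged/RunWorkerCompleted handlers would update controls instead of writing to the console, and the file name and percentages below are illustrative:

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class WorkerDemo
{
    static readonly AutoResetEvent done = new AutoResetEvent(false);

    static void Main()
    {
        var bw = new BackgroundWorker { WorkerReportsProgress = true };

        bw.DoWork += (s, e) =>
        {
            string path = (string)e.Argument;                  // typecast the argument
            for (int i = 1; i <= 4; i++)
            {
                Thread.Sleep(10);                              // stand-in for heavy work
                ((BackgroundWorker)s).ReportProgress(i * 25, path); // percent + userState
            }
            e.Result = path.Length;                            // handed to RunWorkerCompleted
        };

        bw.ProgressChanged += (s, e) =>
            Console.WriteLine(e.ProgressPercentage + "% " + (string)e.UserState);

        bw.RunWorkerCompleted += (s, e) =>
        {
            Console.WriteLine("result: " + (int)e.Result);
            done.Set();
        };

        bw.RunWorkerAsync("sample.apk");   // kick off the background task
        done.WaitOne();                    // keep the console host alive until completion
    }
}
```

The DoWork body runs on a thread-pool thread, so the UI thread stays responsive while the heavy loop executes.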

Next up, strings are immutable in C#. There are classes provided that make efficient use of memory and processing power to manipulate strings; simple concatenation in string-intensive code is a lot more expensive than using one of the dedicated classes. I often need a character array so that each string literal can be analysed thoroughly – the <string type>.ToCharArray() method does just that. I find the Trim() method very useful for removing specific characters from a string, and the Split() method takes a char [] and splits the string at those pivot points. The StringBuilder class is very useful when building long lists of strings after extensive parsing – just instantiate it and use the Append()/AppendLine() methods.
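A quick sketch of those string helpers together – the sample permission string is mine, for illustration:

```csharp
using System;
using System.Text;

class StringDemo
{
    static void Main()
    {
        string line = "  permission;android.permission.INTERNET  ";

        char[] chars = line.Trim().ToCharArray();  // per-character access
        string[] parts = line.Trim().Split(';');   // split at the pivot char

        var sb = new StringBuilder();              // cheap repeated appends
        foreach (string p in parts)
            sb.AppendLine(p);

        Console.WriteLine(parts[1]);   // prints android.permission.INTERNET
        Console.WriteLine(chars.Length);
    }
}
```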

The Regex class’s IsMatch() method returns a bool for a string pattern matched against an input string. Quick and dirty: “[aA][dD][1-9]” matches any string containing an upper or lower case ‘a’, then ‘d’, then any digit from 1-9, in that order and sequence. It’s a very powerful method for fast extraction of certain strings from long logs or dumps.
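For example, testing that pattern against a few strings (the sample inputs are mine):

```csharp
using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        string pattern = "[aA][dD][1-9]";

        Console.WriteLine(Regex.IsMatch("loadAd3Banner", pattern)); // True ("Ad3")
        Console.WriteLine(Regex.IsMatch("ad0", pattern));           // False: 0 is not in 1-9
        Console.WriteLine(Regex.IsMatch("AD7", pattern));           // True
    }
}
```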

So much of the real-world code that I write uses any and all of these in ways dictated by the results required.

For my current software pursuits I am using all of what C# can offer, but I tend to keep things streamlined, and once I find a good set of classes I implement the logic myself using the constructs the language provides. Thus you have I/O, multithreading, the regular programming constructs, extensive graphics support, streams for fast byte-level processing, excellent debugging utilities, provision for unsafe code and the Win32 API if needed, and networking code, among others. I think the power of rapid application development is very evident, and the benefits far outweigh any cons of using C#. In fact you can use just about any language that’s in vogue and collaborate with other developers on a common platform. In the end, I think that’s the biggest advantage. Think about it: are you DotNet-wise, and if not, what are you missing out on?


