
Tuesday, March 29, 2011

Tools to locate class file in JAR / CLASSPATH

There are a number of different approaches for obtaining the location (JAR/directory) of a class at runtime.  The following approach works pretty well…

Class c = ...
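The snippet above is truncated in this copy of the post. A minimal sketch of the common idiom it likely refers to (asking the class for its ProtectionDomain/CodeSource); the class name `WhereIs` is mine:

```java
// Sketch only: the post's original snippet is lost; this is the widely used
// ProtectionDomain/CodeSource idiom for locating a class at runtime.
public class WhereIs {
    public static void main(String[] args) {
        Class<?> c = WhereIs.class; // substitute the class you are hunting for

        // The CodeSource URL is the JAR file or directory the class was
        // loaded from (null for bootstrap classes such as java.lang.String).
        java.security.CodeSource cs = c.getProtectionDomain().getCodeSource();
        System.out.println(cs != null ? cs.getLocation() : "(bootstrap class)");

        // Alternative: ask for the .class resource itself; for a class inside
        // a JAR this yields a jar:file:...!/... URL.
        System.out.println(c.getResource("/" + c.getName().replace('.', '/') + ".class"));
    }
}
```

For a class loaded from a JAR, the first line prints the JAR's file: URL; for a directory classpath entry, the directory itself.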

You could also try VM debug options at startup, such as:

  • Trace loading of classes.

  • Trace all classes loaded in order referenced (not loaded).

In terms of static/offline location of classes, most people will suggest jarbrowser, jarminator, or brute force by leveraging a combination of the unix find command, unzip -l, and grep.

I’ve created a derivative of the offline approach.  Essentially I have three scripts:

  1. jarclasspath is a shell script/Java class combo used to formulate a valid and complete list of libraries and directories that are referenced from the specified source JAR by way of META-INF/MANIFEST.MF Class-Path directives. A classpath containing a single JAR file could ultimately expand to hundreds of libraries, should that top-level archive specify Class-Path directives, the dependent libraries in turn provide their own Class-Path directives, and so on.  See this post for more details regarding manifest classpath attributes.
  2. jarcheck is a simple perl script that checks for presence of the specified class file in the specified JAR file/or directory. Note that Class-Path manifest directives can reference JAR files and/or directories. This tool supports searching both.
  3. jarwhich is a demonstration shell script that shows how to leverage the jarclasspath and jarcheck scripts above.  This particular script will search the CLASSPATH (and dependent libraries/directories based on Class-Path directives) to locate the requested class file.


Using my UCM-flavoured WebLogic Server as the basis for the test, I sourced the environment script under $DOMAIN_HOME/bin/ to set CLASSPATH.

This resulted in a CLASSPATH containing just 16 entries. However, all is not as it seems!

/u01/app/oracle/product/Middleware/oracle_common/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar: ...
/u01/app/oracle/product/Middleware/wlserver_10.3/server/lib/weblogic.jar: ...

% echo $CLASSPATH | tr ":" "\n" | wc -l

The weblogic.server.modules_10.3.4.0.jar referenced above in the CLASSPATH, however, has a Class-Path manifest directive referencing some 180 additional dependent JAR files:

% echo $CLASSPATH | tr ":" "\n" | grep weblogic.server.modules
/u01/app/oracle/product/Middleware/modules/features/weblogic.server.modules_10.3.4.0.jar

% unzip -l /u01/app/oracle/product/Middleware/modules/features/weblogic.server.modules_10.3.4.0.jar
Archive:  weblogic.server.modules_10.3.4.0.jar
  Length     Date   Time    Name
--------    ----   ----    ----
     8880  03-23-11 05:00   META-INF/MANIFEST.MF
--------                   -------
     8880                   1 file

% unzip -d /tmp /u01/app/oracle/product/Middleware/modules/features/weblogic.server.modules_10.3.4.0.jar META-INF/MANIFEST.MF
Archive:  weblogic.server.modules_10.3.4.0.jar
  inflating: /tmp/META-INF/MANIFEST.MF 

% cat /tmp/META-INF/MANIFEST.MF | more
Manifest-Version: 1.0
Implementation-Vendor: BEA Systems
Implementation-Title: Oracle WebLogic Server Module Dependencies 10.3
Thu Oct 28 06:03:12 PDT 2010
Feature-Name: weblogic.server.modules
Source-Repository-Change-Id: 1374366
Class-Path: weblogic.server.modules.wlsve_10.3.4.0.jar ../com.bea.core
.antlr.runtime_2.7.7.jar ../com.bea.core.descriptor.j2ee_1.5.0.0.jar
../com.bea.core.descriptor.j2ee.binding_1.5.0.0.jar ../com.bea.core.d
escriptor.wl_1.3.3.0.jar ../com.bea.core.descriptor.wl.binding_1.3.3.
0.jar ../com.bea.core.datasource6_1.9.0.0.jar ../com.bea.core.datasou
rce6.binding_1.9.0.0.jar ../com.bea.core.beangen_1.7.0.0.jar ../com.b
ea.core.descriptor.settable.binding_1.7.0.0.jar ../com.bea.core.diagn
ostics.accessor_1.5.0.0.jar ../com.bea.core.diagnostics.accessor.bind
ing_1.5.0.0.jar ../ ../ ../com.bea.core.ejbgen_1.1
.0.2.jar ../org.apache.ant_1.7.1/lib/ant-all.jar ../com.bea.core.repa
ckaged.apache.bcel_5.2.1.0.jar ../com.bea.core.repackaged.jdt_3.5.2.0
.jar ../com.bea.core.apache.commons.collections_3.2.0.jar ../com.bea.
core.apache.commons.lang_2.1.0.jar ../com.bea.core.apache.commons.poo
l_1.3.0.jar ../com.bea.core.apache.commons.io_1.0.0.0_1-4.jar ../com.
bea.core.apache.commons.fileupload_1.0.0.0_1-2-1.jar ../com.bea.core.
apache.dom_1.0.0.0.jar ../com.bea.core.apache.logging_1.0.0.0.jar ../
org.apache.openjpa_1.2.0.0_1-1-1-SNAPSHOT.jar ../com.bea.core.xml.xml
beans_2.1.0.0_2-5-1.jar ../com.bea.core.logging_1.8.0.0.jar ../
a.core.bea.opensaml_1.0.0.0_6-1-0-0.jar ../com.bea.core.bea.opensaml2
_1.0.0.0_6-1-0-0.jar ../com.bea.core.monitoring.harvester.api_2.3.0.0
.jar ../com.bea.core.monitoring.harvester.jmx_2.3.0.0.jar ../com.bea.

As you can see above, this short 16-entry CLASSPATH soon expands to something massive.  In fact, it expanded to 412 unique entries (primarily JAR files, but also directories)!
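Incidentally, when parsing such manifests programmatically there is no need to re-join the wrapped 72-byte lines by hand: java.util.jar.Manifest does that for you. A small sketch (the argument is any JAR carrying a Class-Path attribute; the class name is mine):

```java
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class ManifestClasspath {
    public static void main(String[] args) throws Exception {
        // arg0: any JAR whose manifest carries a Class-Path attribute
        JarFile jar = new JarFile(args[0]);
        Manifest mf = jar.getManifest();
        if (mf != null) {
            // Manifest has already unwrapped the continuation lines, so the
            // value comes back as a single space-separated string.
            String cp = mf.getMainAttributes().getValue(Attributes.Name.CLASS_PATH);
            if (cp != null) {
                for (String entry : cp.split("\\s+")) {
                    System.out.println(entry); // resolved relative to the JAR's directory
                }
            }
        }
        jar.close();
    }
}
```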

Let us run some tests:

% ~/bin/jarwhich oracle.jdbc.driver.OracleDriver
/u01/app/oracle/product/Middleware/oracle_common/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar: oracle/jdbc/driver/OracleDriver.class
##Match Found.
/u01/app/oracle/product/Middleware/wlserver_10.3/server/lib/ojdbc6.jar: oracle/jdbc/driver/OracleDriver.class
##Match Found.

% ~/bin/jarwhich javax.mail.internet.MimeUtility
/u01/app/oracle/product/Middleware/modules/javax.mail_1.1.0.0_1-4-1.jar: javax/mail/internet/MimeUtility.class
##Match Found.


And finally, here are the scripts:


#!/bin/sh
# Copyright (c) 2011, Matt Shannon.
#    NAME      jarclasspath

if [ -z "${JAVA_HOME}" ]; then
  printf "\n\nError: Ensure JAVA_HOME environment variable is set.\n"
  exit 1
fi

JAVACMPCMD="${JAVA_HOME}/bin/javac"
JAVARUNCMD="${JAVA_HOME}/bin/java"
CP=/tmp

## note - need to be careful with dollar signs and backslashes!

cat > /tmp/ <<EOF

import java.util.List;
import java.util.ArrayList;

import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class JarClasspath
{
  public static void main(String[] args)
  {
    if (args.length < 1)
    {
      System.err.println("Usage: arg0 = /path/to/jar");
      System.exit(1);
    }

    String sourceJar = args[0];

    List<String> jars = new ArrayList<String>();

    try
    {
      File entry = new File(sourceJar);
      String entryPath = entry.getCanonicalPath();

      if (entry.isDirectory())
      {
        System.out.println(entryPath);
      }
      else if (entry.isFile())
      {
        if (entryPath.toLowerCase().endsWith(".jar"))
        {
          System.out.println(entryPath);
          jars.add(entryPath);
          printManifestClasspathEntries(jars, entry);
        }
      }
    }
    catch (Exception e)
    {
      e.printStackTrace();
    }
  }

  public static void printManifestClasspathEntries(
    List<String> jars,
    File sourceJar
  ) throws Exception
  {
    File[] entries = getManifestClasspathEntries(sourceJar);
    int length = (entries == null) ? 0 : entries.length;
    for (int i = 0 ; i < length; i++)
    {
      File entry = entries[i];
      if (entry == null)
      {
        continue;
      }

      String entryPath = entry.getCanonicalPath();

      if (entry.isDirectory())
      {
        System.out.println(entryPath);
      }
      else if (entry.isFile())
      {
        if (entryPath.toLowerCase().endsWith(".jar"))
        {
          if (! jars.contains(entryPath))
          {
            System.out.println(entryPath);
            jars.add(entryPath);
            printManifestClasspathEntries(jars, entry);
          }
        }
      }
    }
  }

  public static File[] getManifestClasspathEntries(File file)
    throws IOException, FileNotFoundException
  {
    File[] results = null;

    InputStream is = null;
    try
    {
      is = new FileInputStream(file);

      JarFile jarfile = new JarFile(file);

      Manifest manifest = jarfile.getManifest();
      if (manifest != null)
      {
        Attributes attributes = manifest.getMainAttributes();
        if (attributes != null)
        {
          String cp = attributes.getValue(Attributes.Name.CLASS_PATH);
          if (cp != null)
          {
            String[] entries = cp.split("\\\\s");
            int length = (entries == null) ? 0 : entries.length;
            if (length > 0)
            {
              results = new File[length];
              for (int i = 0 ; i < length; i++)
              {
                if ((entries[i] == null) || (entries[i].trim().length() == 0))
                {
                  continue;
                }
                File f = new File(file.getParentFile(), entries[i]);
                results[i] = f;
              }
            }
          }
        }
      }
    }
    finally
    {
      if (is != null)
      {
        try { is.close(); } catch (Exception ignore) {}
        is = null;
      }
    }
    return results;
  }
}
EOF

RESULTCODE=$?
# Checks error
if [ ${RESULTCODE} -ne 0 ]; then
  exit 1
fi

# Compile tool
${JAVACMPCMD} -classpath ${CP} -d "/tmp" /tmp/

# Process arguments
for jar in "$@"; do
  ${JAVARUNCMD} -classpath ${CP} JarClasspath "${jar}"
done

#!/usr/bin/perl -w
# Checks if specified class is found in provided jar/directory
use Getopt::Std;

$0 =~ /([^\/]+)$/ ; # Pattern match; 1 or more chars at end of perl script file name (after the last / if present)
$SCRIPT = $1 ;      # Contains the subpattern from the first set of parentheses in the last pattern matched

getopts("s") ;      # Sets $opt_s if the -s flag was supplied

# Get the class argument.

unless (((scalar @ARGV) == 2) &&
        (defined ($class = $ARGV[0])) && ($class =~ /^[\w\.\/]+$/) &&
        (defined ($jar = $ARGV[1])) && ($jar =~ /^[\w\.\/\-]+$/))
{
  die("Usage: $SCRIPT [-s] <class> <jar>\nwhere:\n",
      "\t<class> is the fully-qualified, dot or slash delimited name of a Java class.\n",
      "\t<jar> is the fully-qualified jar file or directory to search.\n",
      "\t-s tells jarcheck that the full class name is not provided\n");
}

# Get the partial name of the .class file for this class.
$classFile = $class ;
$classFile =~ s/\./\//g ;
$classFile .= ".class" ;

$status = 1 ; # exit status; becomes 0 once a match is found

# If the jar is a file, search it for the class file name.
if ((-f $jar) && (open(ARCHIVE, "unzip -l $jar|")))
{
  while (defined ($line = <ARCHIVE>))
  {
    # escape any $ in the class file name
    $escaped = $classFile;
    $escaped =~ s/\$/\\\$/g;

    # \b allows you to perform a "whole words only" search using a regular expression
    if ($line =~ /\b$escaped\b/)
    {
      print("$jar: ");
      if (defined($opt_s))
      {
        # partial class name supplied; print the full matching entry
        $line =~ /[_\w\/\$]+$classFile/;
        print("$&\n");
      }
      else
      {
        print("$classFile\n");
      }
      $status = 0 ;
    }
  }
  close(ARCHIVE);
}
# If the jar is in fact a directory, see if the .class file is under it.
elsif (-d $jar)
{
  if (-f "$jar/$classFile")
  {
    print("$jar: $classFile\n");
    $status = 0 ;
  }
  elsif (defined($opt_s))
  {
    $classFile =~ /[_\w]+\.class/;
    $maxdepth = ($jar eq ".") ? "-maxdepth 1" : "";
    $results = `find $jar $maxdepth | grep -w $&`;
    if (!$?)
    {
      print $results;
      $status = 0 ;
    }
  }
}

exit $status ;



#!/bin/bash
# Copyright (c) 2011, Matt Shannon.
#    NAME      jarwhich

if [[ $# -ne 1 ]]; then
  printf "\nUsage: $0 <class>\n"
  printf "  where <class> is the fully-qualified, dot or slash delimited name of\n"
  printf "  the Java class to locate. e.g. oracle.jdbc.driver.OracleDriver\n"
  exit 1
fi

if [ -z "${CLASSPATH}" ]; then
  printf "\nError: Ensure CLASSPATH environment variable is set.\n"
  exit 1
fi

DIRNAMECMD=`which dirname`
SCRIPT_DIR=`${DIRNAMECMD} "$0"`
SCRIPT_DIR="`cd \"${SCRIPT_DIR}\" && pwd`"

jarCheck()
{
  "${SCRIPT_DIR}"/jarcheck "$1" "$2"
}

# temporary file holding the fully expanded classpath (name is mine)
file=/tmp/jarwhich.$$

echo $CLASSPATH | tr ":" "\n" | sort -u | xargs "${SCRIPT_DIR}"/jarclasspath | sort -u > $file

for line in `cat $file`; do
  jarCheck $1 "${line}"
  if [[ $? -eq 0 ]]; then
    printf "##Match Found.\n"
    # exit 0
  fi
done

rm -f $file

JAR : MANIFEST.MF Class-Path referencing a directory

Leveraging Java "1.6.0_24" on Windows XP, I performed some quick tests to determine if a JAR's manifest (META-INF/MANIFEST.MF) Class-Path attribute could reference a directory, thereby automatically picking up any contained classes/jars within that directory.

The result... INTERESTING..

My directory tree contents were as follows:


the "test.jar" found in the top level "test" directory contained a single file entry:

the "abc.jar" found in the "lib" directory contained a single file entry; a class named "Testing" :-

public class Testing
{
  public static void main(String args[])
  {
    System.out.println("found me");
  }
}

To prove our Testing class can be located, we set test.jar's MANIFEST.MF contents initially to:

Manifest-Version: 1.0
Class-Path: lib/abc.jar
Created-By: 1.6.0_24 (Sun Microsystems Inc.)

(Note, following the Created-By: ... line, there are two newlines.)

Invoking the following java command line, we see the Testing class was successfully triggered:

C:\test>java -cp test.jar Testing
found me

I did some additional testing with relative and absolute paths in the MANIFEST.MF, the results of which were:

works:  Class-Path: ./lib/abc.jar
works:  Class-Path: /C:/test/lib/abc.jar
works:  Class-Path: \C:\test\lib\abc.jar
fails:  Class-Path: C:\test\lib\abc.jar

Next, I altered the MANIFEST.MF contents to:

Manifest-Version: 1.0
Class-Path: lib/
Created-By: 1.6.0_24 (Sun Microsystems Inc.)

and re-issued the java command:

C:\test>java -cp test.jar Testing
Exception in thread "main" java.lang.NoClassDefFoundError: Testing
Caused by: java.lang.ClassNotFoundException: Testing
        at$ Source)
        at Method)
        at Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
Could not find the main class: Testing.  Program will exit.

Thus, it appeared jar files in the lib directory would not be automatically included in the classpath.
To be overly thorough, I decided to run some additional tests, the results of which were:

fails:  Class-Path: lib
fails:  Class-Path: lib/
fails:  Class-Path: ./lib
fails:  Class-Path: ./lib/
fails:  Class-Path: \C:\test\lib\
fails:  Class-Path: \C:\test\lib
fails:  Class-Path: /C:/test/lib/
fails:  Class-Path: /C:/test/lib

At this point, I was of the opinion that a directory specified as part of a manifest Class-Path attribute/directive would simply be ignored.


I decided to extract (and subsequently delete) abc.jar.  The contents of my directory tree were thus:


I set the MANIFEST.MF contents to:

Manifest-Version: 1.0
Class-Path: lib/
Created-By: 1.6.0_24 (Sun Microsystems Inc.)

and re-issued the java command:

C:\test>java -cp test.jar Testing
found me

SUCCESS. It had located the class. What was even more interesting came out in the subsequent tests I performed:

fails:  Class-Path: lib
fails:  Class-Path: ./lib
fails:  Class-Path: \C:\test\lib
fails:  Class-Path: /C:/test/lib
works:  Class-Path: lib/
works:  Class-Path: ./lib/
works:  Class-Path: \C:\test\lib\
works:  Class-Path: /C:/test/lib/

Hence, if an explicit directory name is provided as part of the Class-Path attribute, it must have a trailing slash in order to be recognized!

If a directory entry ends with "." or "..", no trailing slash is required.  For example, if test.jar was moved to /test/lib, and had its MANIFEST.MF Class-Path set to ../.., then Testing.class would be found if it resided in "/".

As a final test, and just to be doubly certain that relative paths specified in a MANIFEST.MF are in no way influenced by the java invoking end-user's working directory, I ran the following:

(with MANIFEST.MF Class-Path set to lib/  and lib containing Testing.class)

C:\>java -cp test\test.jar Testing
found me
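These findings line up with the documented URLClassLoader contract: any URL ending in "/" is assumed to refer to a directory; anything else is assumed to be a JAR file. A self-contained sketch of that contract (class names are mine; it needs a JDK, not just a JRE, since it uses the compiler API to produce a throwaway Testing.class):

```java
import javax.tools.ToolProvider;

public class SlashDemo {
    public static void main(String[] args) throws Exception {
        // Create a scratch directory containing a compiled Testing.class.
        File lib = new File(System.getProperty(""), "slashdemo" + System.nanoTime());
        lib.mkdirs();
        File src = new File(lib, "");
        FileWriter w = new FileWriter(src);
        w.write("public class Testing {}");
        w.close();
        ToolProvider.getSystemJavaCompiler().run(null, null, null, src.getPath());

        // With the trailing slash the URL is treated as a class directory.
        URLClassLoader withSlash = new URLClassLoader(
            new URL[]{ new URL("file:" + lib.getPath() + "/") }, null);
        System.out.println(withSlash.loadClass("Testing").getName());

        // Without it, the same path is assumed to be a JAR file, so the
        // lookup fails even though the directory exists.
        URLClassLoader noSlash = new URLClassLoader(
            new URL[]{ new URL("file:" + lib.getPath()) }, null);
        try {
            noSlash.loadClass("Testing");
            System.out.println("unexpectedly found");
        } catch (ClassNotFoundException expected) {
            System.out.println("not found without trailing slash");
        }
    }
}
```

The manifest Class-Path entries ultimately feed the same machinery, which is why the trailing slash mattered in the tests above.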

Monday, March 21, 2011

Malware exploiting Java Plug-In 1.6.0_22

I learned a nasty lesson yesterday about the dangers of not having the very latest versions of browser plug-ins installed.  The irony is that I work for the company that developed the Plug-in. I had installed the Sun (Oracle) JDK / JRE Standard Edition 6 Update 22 which was released only back in October 2010. This includes a Java Plug-In that will allow applets to be invoked from the browser.

Running the then-latest version of Firefox (3.6.15) on Windows XP, I opened the top few sites returned from a Google search for UFC 128 title fight video in separate tabs. I was immediately called away to tend to my son, and returned to find my wife in front of the laptop with a multitude of Internet Explorer windows popped up. I proceeded to chastise her and give her the ‘WTF are you doing’ / ‘What have you done’ spiel – only to be told she hadn’t pressed a key or button.



I immediately ripped the network cable from the back of the computer, and proceeded to start Task Manager along with a program called TaskInfo (by Iarsn) – and kill any and all processes that didn’t look right. Unfortunately I had recently become a bit lazy/blasé about what runs on my laptop and hadn’t taken much notice of any new drivers/services that were installed (e.g. HP Universal Print Driver / Canon Scanner / OpenVPN network adapter etc.).  So I was struggling to work out what was legit and what was not.

In the few short minutes I was gone from the computer, the malware well and truly set its hooks in, installing crap such as Offerbox and various browser plug-ins/extensions. My AVG Anti-Virus Free Edition 2011 unfortunately caught very little of the malware that was being installed.

Fortunately I also had two additional free programs installed on my machine: the first being CCleaner, and the second, a very old version of Unlocker, developed by Cedrick Collomb.  Unlocker is an extremely useful tool that is capable of releasing various locks on files that are being held by system and application processes. Using CCleaner, I was able to see *some* of the additional startup programs that had been added. Using Unlocker, I was able to delete a lot of the running malware, and also delete the “C:\Program Files\Java” directory as well. After removing some of the malware, I ran CCleaner’s Registry Cleaner tool, which showed me some locations in the registry that were invalid (pointing to missing shared DLLs, broken application paths, etc.).  Those entries identified by CCleaner which corresponded to malware gave me a starting point for my manual registry “cleaning”.

I also stumbled in to the Windows “Prefetch” directory that gave me a basic timeline of the crap that got installed on my machine when I was away, and also some other programs to try and identify and delete.


I cleaned and deleted as much as I could, and performed my first reboot (keeping the network cable disconnected).  Firing up TaskInfo after the machine started, I could see some rundll32.exe processes that had spawned, pointing to some weird DLL files in existing legitimate directories.  I could see a file named “lusrmgrk.dll” that was being invoked from both the “C:\Windows” and “C:\Program Files\Pidgin” directories. I opened those directories, set the folder view to detailed, and enabled display of both the “Date Modified” and “Date Created” columns.  Sure enough, those files had been created/modified at exactly the time of this malware travesty. I deleted these files with Unlocker’s assistance.  There were other strange files in the Windows directory with similar dates/times that were listed in the Prefetch folder at around the time of the malware installation; so those got deleted as well.  I initially set about running some MD5 checksums on the files to see if anyone had reported a similar virus with the same checksum, but didn’t get any hits.  For example, the MD5 of “Lfeboa.exe”, which was 137216 bytes, was “014303FB5CF4F2F8A2EADD5EDD82427B”.

Up until this stage, I had not reconnected the network cable on the infected laptop.  The Apple iPad was getting used to search Google for MD5 checksums and read various malware removal articles.  It was at this point I really hated Apple and what it stood for. Steve Jobs and/or the marketing geniuses at Apple decided the iPad would not need any external storage support (micro SD etc.). I can only speculate this is purely self-motivated, so that they can get the Apple fanboys to continually upgrade to a newer device with more capacity. What this meant, though, is that I had no way of getting a file from the iPad to my infected computer without establishing a network connection (or jail-breaking the device and buying a camera kit). Thank you, Apple.

So the network connection on the PC had to be turned on (very very briefly).  I downloaded two apps:

TDSSKiller (an anti-rootkit utility from Kaspersky Lab):


Malwarebytes Anti-Malware ( )

Malwarebytes needed to be briefly connected to the internet so that it could update its database, but there is a workaround for this in future: see issue #4:

Having downloaded and installed these, the network connection was yanked and I let the programs do their jobs. Both apps found remaining malware on disk and were able to remove most* traces. I initially did the quick scan with Malwarebytes, but eventually did the full scan which detected an additional malware file.

Upon reboot, I fired up TaskInfo, but could still see two rundll32.exe processes.  However, the processes were attempting to load DLLs that were not present on disk.  I decided to run the Windows Malicious Software Removal Tool, but it did not detect anything.

My concern was that there was still some malware on disk that was causing the processes to be initiated.

TaskInfo also provides an option to view the parent process ID of a task.  The parent process IDs for the rundll32 processes were pointing to an svchost process; in particular, “C:\WINDOWS\System32\svchost.exe -k netsvcs”.

See the following article for a description of svchost:

Basically, svchost is responsible for starting services at startup. If you were to view Task Manager, you would see there are a number of svchost processes, each of which is starting services from a particular service group.  The “netsvcs” group encompasses some 20 or so services:



One of the services in the group was ultimately triggering the rundll32 processes.  I just wasn’t sure how.  It could be that a malware service had been added; or maybe an existing legitimate service had been compromised; or maybe an existing service had a dependency on some other service that was compromised etc.

Unfortunately I could not find an easy way to work out which service was triggering the rundll32 processes.  So I decided to basically stop them in blocks and restart the computer and try and isolate which service was somehow responsible.

Bingo; Task Scheduler was somehow a part of this! I had read up about malware creating and scheduling tasks to spawn their evil crap.   I had looked in the C:\Windows\Tasks directory, but it was empty; this was from Windows Explorer with all options set to show hidden files.  So I was concerned that the Task Scheduler process itself may be compromised, or a dependent service (Remote Procedure Call).

I decided to do one final check from command prompt supplying the “/ah” option to the “dir” command. Wow; there were some .job files in the directory.


Using the “Xcacls.vbs” tool, I was able to give my local administrator user “full control” on the two .job files. Having done this, I was able to change the file attributes so they were no longer hidden, etc.

At this point, the tasks became visible from Windows Explorer, and they could be deleted:




A quick reboot, and the processes were no more. 

I’ll never know if I’ve removed all the malware; but one thing is for sure, I’m going to be much more anal when it comes to browser plug-ins.

And … if you are currently running JDK 1.6.0_23 or older, make sure you upgrade!!!!!!

Wednesday, March 16, 2011

Tomato – Adding Custom Packages - NVRAM vs JFFS vs USB Stick

I made the mistake last night of visiting the Tomato forum using my new (but actually second-hand) generation 1 iPad. It was late, and I desperately needed the sleep, but what the hell. Unfortunately I came across a forum thread titled My utilities web site revived.  Some two and a bit hours later (and well into the next day), I put the iPad down, having read the thread from start to finish.  I gleaned so many useful and interesting bits of information from that thread that I felt compelled to write a blog post. This was mostly for my own future reference, but also to share the knowledge.

All I can say is that ‘rhester72’ is a porting/compilation stud and, by the looks of it, one generous and smart dude.  The Tomato community is very lucky to have him.

Me personally – well, I’m one of the 4 billion Java developer drones. I haven’t actively coded in C/C++ for a number of years. I miss these languages; mostly I miss the forgotten skills needed to properly harness and control them – compiler directives, linking, makefiles, best practices, etc.   My foray into iPad application development will at least see me leveraging C once more.

Anyway, returning to blog topic …

rhester72 has gone to the trouble/effort of compiling a number of very useful packages/applications for the Tomato environment (MIPS processor series / running 2.6 Linux kernel/and to a lesser extent 2.4 kernel). These packages add some nice bells and whistles that may come in very handy. For example,  the torrent client (transmission) means you can potentially turn off your power-sucking gaming box and leave it to your ~10 watt ASUS RT-N16 :)

The binaries come in two styles:

  1. static linked
  2. dynamic linked

Static linked binaries are generally much bigger in size, as the dependent libraries are linked/packaged directly in the resulting binary. It is not however always possible to produce a static binary for certain types of packages due to their architecture.

Dynamically linked binaries, on the other hand, should be smaller, as they are linked at runtime to dependent shared libraries.  The issue with this style of binary is actually finding the shared libraries themselves at runtime:

  • the shared library may actually be missing from the machine, or not available in the shared library search path (meaning the binary cannot be run)
  • an incompatible version of the shared library may be located in the shared library search path (causing some type of conflict resulting in the binary not behaving correctly)

Dynamically linked binaries can, however, reduce memory footprint and save valuable space, should multiple packages that you plan on running leverage the same versions of specific shared libraries.

Space and memory permitting, it is often simpler to take the static binary.

Check out rhester72’s list of packages at the following URL:

Be sure to view the descriptions, notes, and most importantly the readme.

When using dynamically linked binaries, it is not always obvious what the required shared library dependencies are. Fortunately, the “ldd” command can be used to help out.

root@asus:/tmp# wget
root@asus:/tmp# chmod u+x atop
root@asus:/tmp# ldd atop
         => not found
         => /lib/ (0x2aabf000)
         => not found
         => /lib/ (0x2aadd000)
         => /lib/ (0x2aafc000)
         => /lib/ (0x2aaa8000)


Above, you can see that two of the shared libraries are missing.  Let’s rectify this…


root@asus:/tmp# wget

root@asus:/tmp# wget

root@asus:/tmp# ldd atop
         => not found
         => /lib/ (0x2aabf000)
         => not found
         => /lib/ (0x2aadd000)
         => /lib/ (0x2aafc000)
         => /lib/ (0x2aaa8000)

This still did not work. Why?

The answer is the shared library search path.

In Linux, Shared Libraries are searched (in order) from the following locations until a match is located:

  1. LD_LIBRARY_PATH environment variable (if set)
  2. A specific rpath location encoded in to the dynamically linked ELF binary (or shared library) at compilation time (if set)
  3. System default paths defined in /etc/

On my specific stock tomato instance, LD_LIBRARY_PATH is not set. 

Using the readelf command on the atop binary, I can see the Library rpath is set to /opt/lib:/opt/usr/lib.


The contents of /etc/ on my instance are:

root@asus:/tmp# cat /etc/


Thus, I need to modify the instance so that the missing shared libraries can be found.  For the time being, I’m going to manually set LD_LIBRARY_PATH to the /tmp directory, which is where I downloaded the files in the first place:

root@asus:/tmp# export LD_LIBRARY_PATH=/tmp

root@asus:/tmp# ldd atop
         => /tmp/ (0x2aabf000)
         => /lib/ (0x2ab12000)
         => /tmp/ (0x2ab30000)
         => /lib/ (0x2ab53000)
         => /lib/ (0x2ab72000)
         => /lib/ (0x2aaa8000)

As you can see above, the dependent libraries are now found, and we can at least attempt to invoke the atop binary!

Now that you know how to add the packages, the question is where to install them:

  • Recall that the /tmp directory on the Tomato router is volatile; It is essentially a ramdisk created in available RAM of the router, and is blown away upon reboot.
  • Non-volatile available/free flash memory on the router, if sufficient, can potentially be leveraged with important caveats.
  • Another recommended choice (if available with your router) is to leverage an external USB stick/drive.
  • Possibly you may be able to even utilize network attached storage.

Flash memory is obviously very convenient, but it comes in a variety of sizes. The Linksys WRT54G series came in 2MB, 4MB and 8MB varieties. It is hard enough getting a distribution like Tomato on a router with such little flash memory, let alone using it for custom packages and the like.

The ASUS RT-N16 on the other hand has 32 MB of flash. Even with the full-blown VPN 2.6 Tomato bundle installed, there is still some 25MB available for potential JFFS2 use. There is also a special area in the flash memory known as the NVRAM segment. This is essentially the very last segment in the flash memory and is at minimum one “erase block” in size. Although the erase block size is typically 64KB (or 128KB as is the case with the RT-N16), the actual NVRAM is programmatically restricted to 32KB.

Tomato does support a special “nvram setfile2nvram” command that will allow you to store a very small file in any remaining NVRAM space (that is, within the 32KB, not the full erase block segment size, i.e. 128KB on the RT-N16).  However, this option should be leveraged as a last resort. If you have free flash memory available, then JFFS2 is a better option.  Better again is to leverage a USB stick, if your router supports it.

The reason for this is best explained by OpenWRT developer/co-founder MBM’s post at the following location:

To quote him directly:

“The flash chip is broken up into sections called erase blocks. On a 4M chip it's usually 64k and on an 8M chip it's 128k. Each erase block is rated at about 100,000-1,000,000 erase/write cycles depending on vendor.

This just means that on a 4M chip you have to have to erase 64k and rewrite it even if you only want to change one byte of the 64k. After that 64k block has been erased 100,000 times you risk failure where it won't store the data properly.

The problem with the NVRAM implementation is that it's exactly one erase block at the very end of the flash. When you boot, the NVRAM data is copied to a buffer in ram; with the exception of "nvram commit", all the nvram commands are using the copy in ram. When you do an "nvram commit" it writes the contents of ram to the flash. So, when you have a chip rated for 100,000 cycles, you'll probably have a failure around the 100,000th "nvram commit".

Although leveraging JFFS2 will still result in wear due to erase cycles, this file system is specifically designed with flash devices in mind.  It is engineered in such a way as to make “wear-levelling more even and prevent erasures from being too concentrated”.

Once again, though, MBM makes an important point:

Suppose we have a jffs2 filesystem with two types of files, files that never change and files that change frequently. Common sense says that the erase blocks containing the files that never change or contain free space will only be written once and will remain untouched while the blocks containing the other files will change frequently; wear leveling means that all of the blocks within the jffs2 filesystem will be used equally, so all of the blocks in the above example would be written to equally even if it means moving data that hasn't changed

Thus, if you anticipate files being updated /writes occurring on the JFFS2 partition at a considerable rate, you are eventually going to ruin your flash chip.  In such scenarios you must absolutely use something like USB.  You must also be very careful that any custom packages you install are not repeatedly writing (e.g. log messages) to a location that is stored in the JFFS2 partition.


Here is an ugly script I wrote, designed for Tomato 1.28, to get details of flash memory allocation/distribution. Note that I have enabled JFFS2 support on my router from the Tomato UI (Administration > JFFS page):

cat > /tmp/ <<EOF
#!/bin/sh
# Tomato 1.28 Flash Info Script by Matt Shannon

echo "Router model: " \`nvram get t_model_name\`
uname -a
echo ""

dmesg | grep nvram
dmesg | grep jffs2
echo ""

cat /proc/mtd
echo ""

ERASEBLOCKSIZE=0x\`cat /proc/mtd | grep nvram | cut -f3 -d " "\`
echo "Erase Block size is" \`awk 'BEGIN{printf("%d", '\$ERASEBLOCKSIZE' / 1024)}'\` kilobytes
echo ""

NVRAMSEGMENT=0x\`cat /proc/mtd | grep nvram | cut -f2 -d " "\`
echo "NVRAM Full Segment size is" \`awk 'BEGIN{printf("%d", '\$NVRAMSEGMENT' / 1024)}'\` kilobytes

NVRAMSUMMARY=\`nvram show | tail -1\`
NVRAMUSED=\`echo \$NVRAMSUMMARY | cut -d "," -f2 | cut -d " " -f2\`
NVRAMFREE=\`echo \$NVRAMSUMMARY | cut -d "," -f3 | cut -d " " -f2\`
echo "NVRAM Actual Size Available to firmware is" \`awk 'BEGIN{printf("%d", ('\$NVRAMUSED' + '\$NVRAMFREE') / 1024)}'\` kilobytes
echo "NVRAM Summary: \$NVRAMSUMMARY"
echo ""

JFFSSIZE=0x\`cat /proc/mtd | grep jffs2 | cut -f2 -d " "\`
echo "JFFS2 size is" \`awk 'BEGIN{printf("%d", '\$JFFSSIZE' / 1048576)}'\` megabytes
echo ""
EOF

chmod 744 /tmp/


Sample output:

Router model:  Asus RT-N16
Linux asus #8 Tue Nov 30 14:58:27 EST 2010 mips GNU/Linux

0x01fe0000-0x02000000 : "nvram"
0x006e0000-0x01fe0000 : "jffs2"

dev:    size   erasesize  name
mtd0: 00040000 00020000 "pmon"
mtd1: 01fa0000 00020000 "linux"
mtd2: 005aec00 00020000 "rootfs"
mtd3: 01900000 00020000 "jffs2"
mtd4: 00020000 00020000 "nvram"

Erase Block size is 128 kilobytes

NVRAM Full Segment size is 128 kilobytes
NVRAM Actual Size Available to firmware is 32 kilobytes
NVRAM Summary: 852 entries, 21155 bytes used, 11613 bytes free.

JFFS2 size is 25 megabytes
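The sizes in /proc/mtd are hexadecimal byte counts, so the figures in the sample output can be double-checked with plain shell arithmetic (values taken from the output above):

```shell
# /proc/mtd sizes are hexadecimal byte counts.
ERASE_HEX=00020000   # erasesize column for "nvram"
JFFS_HEX=01900000    # size column for "jffs2"

erase_kb=$((0x$ERASE_HEX / 1024))
jffs_mb=$((0x$JFFS_HEX / 1048576))

echo "erase block: ${erase_kb} KB"      # 128 KB
echo "jffs2 partition: ${jffs_mb} MB"   # 25 MB
```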


Recall from above that rhester72’s dynamically linked binary atop hardcodes an rpath of /opt/lib:/opt/usr/lib. If you were to SSH into your router, however, you will likely find that the /opt directory is empty.  Rhester72 leverages an init script (Tomato UI > Administration > Scripts > Init) to attempt to automatically bind the /opt location to a directory “opt” found in the /jffs mount location. (Note: when enabling JFFS2 in Tomato, the JFFS partition is automatically mounted at startup under the /jffs directory.) Here is his script, which will try for up to 30 seconds to bind the opt directory:

t=0
while [[ ! -d /jffs/opt && $t -lt 30 ]]; do
sleep 1
t=`expr $t + 1`
done

if [ -d /jffs/opt ]; then
mount -o bind /jffs/opt /opt
else
logger -t /jffs -p err did not mount within $t seconds
fi

If the bind fails, a message will be written to /var/log/messages.  To utilize such an approach you will need to do the following:

  1. enable JFFS2
  2. once the jffs partition is mounted, mkdir /jffs/opt
  3. make appropriate subdirectories mkdir -p /jffs/opt/bin /jffs/opt/lib /jffs/opt/sbin /jffs/opt/usr/bin /jffs/opt/usr/lib /jffs/opt/usr/sbin /jffs/opt/usr/share

Refer to his readme.

Leveraging the above directory structure to store binaries and shared libraries should give you good results (remember /opt will be bound to /jffs/opt):

root@asus:/tmp/home/root# echo $PATH | tr ":" "\n"

root@asus:/tmp/home/root# cat /etc/ | grep /opt

As a final note, be careful with setting LD_LIBRARY_PATH in any profile.  Also, it is worth inspecting the /etc/profile script: you will see that it attempts to source both /jffs/etc/profile and /opt/etc/profile (if they exist).


Thanks again to rhester72;

Monday, March 14, 2011

Quick & simple VPN setup guide: using OpenVPN on a ‘Tomato’ router

Before the advent of custom firmware on consumer-priced/graded routers (Linksys/Netgear etc), obtaining secure remote access to a network resource using stock firmware was somewhat of an art.  Circa 2002, I remember geeking it up in front of my work colleagues by remotely accessing my home machine’s desktop to download mp3s over Napster. Back then, I was using a combination of port-forwarding, VNC, and SSH; amazingly, this setup could be tunnelled through work’s HTTP proxy server.

Even today, many people still leverage such approaches.  The following links give a bit of a history and overview:

Modern custom router firmwares (DD-WRT / Tomato / etc) make it even simpler by bundling some of the required software.  Here is a link that describes an updated approach to the above:


A friend who runs a small business on a tight technology budget recently asked me what I would suggest to allow his employees remote access to the office “files”.  He and his colleagues are regularly travelling for days at a time and require access both at customer sites and from hotels.  Most of the sites he visits do allow him to connect his laptop to the customer network; but not all. Thus, we needed a solution that would, wherever possible, allow him to leverage the customer’s office internet connection. For those sites that prevent such access, he would use a 3G solution: either a USB dongle, or a 3G Wi-Fi hotspot.

Immediately, OpenVPN sprang to mind as a possible solution – mostly because it is free, and also because it has client support on Windows, Mac, and Linux. The icing on the cake, however, was the fact that the DD-WRT and Tomato router firmwares happen to provide special VPN builds that bundle the OpenVPN server application.  Thus, assuming we could locate a router with sufficient grunt to run such a firmware, there would be no need for a dedicated, separate OpenVPN server machine.

Basically, three main routers sprung to mind:

Asus RT-N16

Linksys (Cisco) E3000

Netgear WNDR3700

The first two routers (Asus & Linksys) are supported by Tomato.  All three routers are supported by DD-WRT.

I’ve had great success with Tomato in the past, so I suggested my friend purchase the Linksys E3000. This can be purchased here in Australia for about $170.

It just so happened that I had an Asus RT-N16 at home, hence I was able to test out the VPN configuration in my home environment before messing with my friend’s office network. I liked it so much (having a VPN) that I decided to keep it and refine it!

The main decisions when leveraging OpenVPN are:

1) Does your router have a static WAN IP address?

If your router does not have a static IP address, then you are at the mercy of your internet provider, who may or may not reissue you the same IP address upon lease expiration/reboot etc. It is best to play it safe and configure your router to use a Dynamic DNS. Sign up for a free account at an appropriate provider (e.g., and then configure your router with the appropriate DDNS account details.  Your router will subsequently register your dynamic IP address with the provider whenever your WAN connection comes up/changes. This way, you can configure your OpenVPN client configuration with a static hostname (e.g.

2) Do I TUN (network TUNnel) or TAP (network tap)?

TAP > runs at layer 2 (OSI model)  - bridge mode ; effectively the VPN client will appear to be on the same network subnet as the destination. Packets broadcast on destination network will be received by client.

TUN > runs at layer 3 (OSI model) – router mode; effectively the VPN client is on a different network to the destination; routing rules are used to allow the client to access the destination network.  Packets broadcast on destination network won’t be received by VPN client.
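In OpenVPN terms, the choice above boils down to a single directive used on both the server and client sides (shown here as a fragment, not a complete configuration):

```
# bridged, layer 2: client appears on the destination LAN itself
dev tap

# routed, layer 3: client sits on its own subnet, reached via routing rules
dev tun
```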

3) What protocol: TCP or UDP?

SgtPepper (Tomato guru) over at the LinksysInfo forum clarified/corrected my understanding of what this option means. Initially I assumed incorrectly that the choice between TCP and UDP came down to what application-layer network protocols you planned on using over the VPN.  Such that, if you wanted FTP/Telnet/SSH/HTTP/HTTPS/SMTP/IMAP/SMB-over-TCP/NFS-over-TCP you chose TCP, whereas for VoIP and network games, UDP.  WRONG!

To quote SgtPepper directly: “Choosing TCP vs UDP should have nothing to do with the application type you're using. OpenVPN tunnels TCP and UDP traffic over whichever protocol you choose. Tunneling TCP over TCP is extremely inefficient, so TCP should only be chosen if you absolutely have to. That should only be if you have to go through an HTTP proxy, trick a firewall, or have a very flaky connection and have problems with UDP. If you have the option, you should absolutely 100% use UDP.”

4) What port for the server to listen on?

The default port is 1194. But if 443 is available, I would suggest using this. The use of port 443 is particularly pertinent to TCP-based OpenVPN configurations whereby client access to the VPN may require tunnelling through a proxy server. Proxy servers are much more likely to accept traffic destined for port 443 versus something like 1194. Port 443 is the default port used by secure HTTP (aka HTTPS), and as such most proxy servers should allow it unimpeded.

5) What LAN subnet/segment should the router be leveraging?

Most routers ship with a default IP address of, and subnet mask Chances are, one of the destinations you visit will also be leveraging such a topology. The problem occurs when the destination network is the same as the actual client network you are connecting from. For example, if my network at home is 192.168.1.x, and I’m at a client site that is using 192.168.1.x, and I attempt to make VPN connection to home, the routing tables get completely messed up. There are probably some crazy network mask and routing rule/metric options you can leverage to work around such a situation, but my advice is to avoid it in the first place.
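The collision is easy to test for before you connect. A crude sketch comparing /24 prefixes (real deployments would compare proper network/mask pairs; the addresses here are just examples):

```shell
# Crude overlap check: compare the first three octets of two /24 networks.
home_lan="192.168.1.0"
client_lan="192.168.1.0"

home_prefix=${home_lan%.*}      # -> 192.168.1
client_prefix=${client_lan%.*}  # -> 192.168.1

if [ "$home_prefix" = "$client_prefix" ]; then
  echo "collision: pick a less common subnet for your home LAN"
  collision=1
else
  collision=0
fi
```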



For my home VPN, I decided to leverage TAP (interface type), TCP (protocol), 443 (port), and (CIDR Notation).

The reason I chose TAP is to theoretically support broadcast traffic, but mostly so as to appear on the same network as my destination. If your client has no business receiving broadcast traffic, or you are expecting large amounts of broadcast traffic on the destination network subnet, then it is best to switch to TUN.

The reason I chose TCP is that OpenVPN natively supports tunnelling of TCP traffic through an HTTP proxy. This means that if we are stuck behind an HTTP proxy without direct internet access, we can still hopefully access the VPN by tunnelling through the proxy server. Refer to SgtPepper’s quote above regarding efficiency, however, and only choose this option if you have to!
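If you do end up behind a proxy, the client side needs only a couple of extra directives. A fragment (the proxy host and port below are placeholders, not values from this setup):

```
# client config fragment: reach a TCP-based OpenVPN server via an HTTP proxy
proto tcp
http-proxy proxy.example.com 3128
```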

I chose port 443 likewise to increase my odds of a proxy server connection working, and also so that it looks legitimate from an auditing perspective.

I chose the 192.168.192.x/ network for my LAN, as this network has less chance of being leveraged at a client site (unlike, for example, 192.168.1.x).

One interesting point to make is that Tomato natively supports two OpenVPN server processes running at the same time.  Thus there is nothing stopping you running TCP and TAP for one instance, and UDP with TUN/TAP on the other instance.


Now… on to the VPN setup:

If you are running a Linksys E3000, I suggest obtaining and installing the following firmware (or newer):


The above firmware can be installed directly over the top of the factory firmware.


If you are running an Asus RT-N16, I suggest obtaining and installing the following firmware (or newer):


Note: With the RT-N16, to upgrade from a stock firmware you must first flash the router with the following:


Next, you must download and install the OpenVPN software specific to your (client) OS.

For Windows, this is currently:

As we only need the client components and RSA certificate management scripts, be sure to DESELECT “OpenVPN Service”.



Next, we will generate the various keypairs…

CD /D "%ProgramFiles%\OpenVPN\easy-rsa"

##   create vars.bat and openssl.cnf from templates

##   Update the values in vars.bat as appropriate, specifically:
##   If requiring stronger keys, change KEY_SIZE from 1024 to 2048
##   Use WordPad to edit the file, as it leverages UNIX line breaks
"%ProgramFiles%\Windows NT\Accessories\wordpad.exe" "%ProgramFiles%\OpenVPN\easy-rsa\vars.bat"


##   Invoke vars.bat to set environment

##   CAUTION - deletes any existing keys / resets SERIAL and INDEX.TXT

##   Construct Certificate Authority keypair, set commonName to be CA


##   Construct server keypair, set commonName to VPNServer
build-key-server server


##   Construct client keypair for user 'matt', set commonName as appropriate (e.g. matt)
build-key client_matt



##   Construct client keypair for user 'louise', set commonName as appropriate (e.g. louise)
## build-key client_louise

##   NOTE - if you specified a challenge password when creating a certificate,
##   you will be required to provide that password should you ever want to revoke that certificate

##   The keys directory will contain private keys / certificates / index file stating
##   issued certificates / serial file containing next serial number to leverage etc.
##   !!! KEEP THEM IN A SAFE PLACE. !!!!


##   Generate Diffie Hellman parameters
##   This will generate dh1024.pem in the keys folder (or dh2048.pem, depending on the KEY_SIZE variable)



Next, we will configure Tomato…

From the Tomato Administration UI, choose “VPN Tunnelling” > “Server”

Refer to screenshots for detailed settings.

Basic Settings:
Start with WAN: <checked>
Interface Type: TAP
Protocol: TCP
Port: 443
Firewall: Automatic
Authorization Mode: TLS
Extra HMAC authorization (tls-auth): Disabled
Client address pool: DHCP <checked>


Advanced Settings:
Poll Interval: 0
Direct clients to redirect internet traffic: <NOT checked>
Respond to DNS: <NOT checked>
Encryption cipher: Use Default
Compression: Adaptive
TLS Renegotiation Time: -1
Manage Client-Specific Options: <checked>
Allow Client<->Client: <checked>
Allow Only These Clients: <NOT checked>

Custom Configuration**:
script-security 3
auth-user-pass-verify /etc/ via-env


Certificate Authority: <paste contents of ca.crt>
Server Certificate: <paste contents of server.crt from -----BEGIN CERTIFICATE----- through -----END CERTIFICATE----- inclusive>
Server Key: <paste contents of server.key>
Diffie Hellman parameters: <paste contents of dh1024.pem>


All of the above OpenVPN configuration options are ultimately stored in NVRAM. Tomato dynamically generates the appropriate configuration file and keys/certificates under the “/tmp/etc/openvpn” area based on these NVRAM values.

If you were to SSH/Telnet in to the router, and issue an “nvram show | grep vpn_server1” command, you should see the various configuration values stored in NVRAM from above.

The dynamically constructed files are as follows:

/tmp/etc/openvpn/server1/ [ca.crt | config.ovpn | dh.pem | server.crt | server.key]

The file contains the iptables entries.

tomato generated vpn files

** You will notice the “Advanced” tab > “Custom Configuration” option for the Server VPN Tunnelling is populated with a script-security and auth-user-pass-verify entry.  These options add an additional layer of security by requiring that the VPN client not only hold a valid keypair, but also present a valid username/password.  The client-supplied username/password is provided to a custom script (that we must create) named  This script must return exit status 0 in order for the VPN client connection to be successful (assuming the client had a valid keypair in the first place).

Unfortunately the /tmp folder is erased and recreated every time the router is rebooted (/etc is a symbolic link to /tmp/etc).  We need a mechanism to ensure the shell scripts for custom authentication survive a reboot. There are three options:

1) enable the JFFS feature, which essentially turns the unused portion of the router's flash memory into a mountable and writable space

2) leverage init scripts in the tomato UI to recreate the various shell scripts required in the /tmp directory at boot time.

3) use the "nvram setfile2nvram <filename>" command to save small files in nvram.  The files will be automatically restored on start-up.

We will leverage the latter option (#3) for our custom authentication script; SSH/telnet in to the router as root and issue the following:

cd /etc

cat > /etc/ <<EOF
i=0
HASHPASS=\`echo -n "\$1\$2" | md5sum | sed s'/\  -//'\`
while [ \$i -lt 10 ]; do
  HASHPASS=\`echo -n \$HASHPASS\$HASHPASS | md5sum | sed s'/\  -//'\`
  i=\`expr \$i + 1\`
done
echo \$1:\$HASHPASS
exit 1
EOF

chmod 755 /etc/

nvram setfile2nvram /etc/
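To see what the hash script produces without a router handy, the same chain can be run as a stand-alone sketch (printf replaces echo -n for portability; the username/password are just examples):

```shell
# Stand-alone version of the hash chain: md5 of user+pass, then
# ten rounds of md5(hash+hash), printed as user:hash.
user="matt"
pass="test1234"

HASHPASS=$(printf '%s' "$user$pass" | md5sum | sed 's/ .*//')
i=0
while [ $i -lt 10 ]; do
  HASHPASS=$(printf '%s' "$HASHPASS$HASHPASS" | md5sum | sed 's/ .*//')
  i=$((i + 1))
done
echo "$user:$HASHPASS"
```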

cat > /etc/ <<EOF
# echo "\${username}"
hash=\`/etc/ "\${username}" "\${password}"\`

USERS=\`cat /etc/vpnusers\`
for u in \$USERS; do
  test "\${hash}" == "\${u}" && exit 0
done
exit 1
EOF

chmod 755 /etc/

nvram setfile2nvram /etc/

To generate password hashes for the users (first argument is username, second argument is password):

/etc/ matt test1234 >> /etc/vpnusers

Persist the vpnusers file:

nvram setfile2nvram /etc/vpnusers


To test the verify script:

export username=matt
export password=test1234
echo $?

If the output is 0 from the above command, the matt/test1234 credential was found in the vpnusers file.  If the output is 1, something is broken!

The hard part is now done. Reboot the router!


The final task is creation of the client configuration file and adding the client keypair and CA:

1) Make a directory "config" under "%ProgramFiles%\OpenVPN" if not already present.

2) Within the "config" directory, make a subdirectory, e.g "homevpn"

3) Copy to the "homevpn" directory ca.crt, and the appropriate client keypair (e.g. client_matt.key / client_matt.crt).

4) Within the "config" directory, create an openvpn client config file, e.g. "homevpn.ovpn"; The contents of "homevpn.ovpn" based on the above server configuration above are as follows:

# The hostname/IP and port of the server. You can have multiple remote entries to load balance between the servers.
remote 443

# Specify that we are a client and that we will be pulling certain config file directives from the server.

# Verify that the server certificate has its nsCertType field set to "server".
ns-cert-type server

# Use a TAP device to match the server's (bridged) interface type.
# On most systems, the VPN will not function unless you partially or fully disable the firewall for the TUN/TAP interface.
dev tap21

# Are we connecting to a TCP or UDP server?
proto tcp

# Keep trying indefinitely to resolve the host name of the OpenVPN server.  Useful for machines which are not permanently connected to the internet such as laptops.
resolv-retry infinite

# Most clients don't need to bind to a specific local port number.

# Try to preserve some state across restarts.

# --float tells OpenVPN to accept authenticated packets from any address, not only the address which was specified in the --remote option.
# Useful if you're using round-robin DNS.  Also useful if your server has a dynamic IP address which the ISP could change.
# I use float so I can connect from inside AND outside my router.

# If the pushed routes appear not to be added on windows hosts, add the following:
# route-delay 30

# SSL/TLS parms.
ca "C:\\Program Files\\OpenVPN\\config\\homevpn\\ca.crt"
cert "C:\\Program Files\\OpenVPN\\config\\homevpn\\client_matt.crt"
key "C:\\Program Files\\OpenVPN\\config\\homevpn\\client_matt.key"

# Enable compression on the VPN link.
# Don't enable this unless it is also
# enabled in the server config file.

# Set log file verbosity.
verb 3

# Silence repeating messages
mute 20

# prompt for username and password

You should hopefully now be able to establish a VPN connection!  Good luck.