
Friday, July 10, 2015

Java SSL HttpUrlConnection Performance Slow using TLS 1.0 with CBC

The fix Oracle implemented in the JVM to combat the BEAST attack can have a significant performance impact when using TLS 1.0 with CBC.  This is particularly noticeable when performing large streaming uploads with HttpURLConnection in fixed-length streaming mode (setFixedLengthStreamingMode) rather than its default mode, where it buffers the request payload in full.

When writing to HttpURLConnection's OutputStream in fixed-length streaming mode through a BufferedOutputStream with the default 8k buffer [OutputStream out = new BufferedOutputStream(uc.getOutputStream())], you can see a pattern like the one below when running with the -Djavax.net.debug=ssl,handshake system property set.

Java 6 1.6.0_91
%% Cached client session: [Session-1, TLS_RSA_WITH_AES_128_CBC_SHA]
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416

Java 7 1.7.0_15
%% Cached client session: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416

When using Java 8 and TLS 1.2, there are none of the 32 byte packets in the output …

Java 8 1.8.0_40
%% Cached client session: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
main, WRITE: TLSv1.2 Application Data, length = 16432
main, WRITE: TLSv1.2 Application Data, length = 16432
main, WRITE: TLSv1.2 Application Data, length = 16432
main, WRITE: TLSv1.2 Application Data, length = 16432

If I set the system property "-Djsse.enableCBCProtection=false" with Java 6 (disabling the BEAST attack fix), the 32 byte packets disappear ...

%% Cached client session: [Session-1, TLS_RSA_WITH_AES_128_CBC_SHA]
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416

As disabling the CBC protection is not viable in production, I looked at what could be done to minimize the occurrence of the 32 byte packets when using TLS 1.0 with CBC.  It turns out that by increasing the buffer size of the BufferedOutputStream wrapping HttpURLConnection’s OutputStream from the default 8k to something much larger, e.g. 256k, the number of 32 byte packets drops dramatically, yielding a significant performance increase.
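The mechanism is easy to see without any network involved. The sketch below is mine (not code from the original test; the class and method names are invented): it counts the write() calls that reach the underlying stream, each of which the BEAST fix would turn into a 1-byte TLS record plus the remainder.

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

public class BufferDemo {
    // Records the size of each write() that reaches the underlying stream,
    // standing in for the SSL socket's OutputStream.
    static class RecordingStream extends OutputStream {
        final List<Integer> writeSizes = new ArrayList<>();
        @Override public void write(int b) { writeSizes.add(1); }
        @Override public void write(byte[] b, int off, int len) { writeSizes.add(len); }
    }

    // Streams totalBytes through a BufferedOutputStream in small 1k application
    // writes and returns the sizes that actually hit the underlying stream.
    static List<Integer> writeSizesFor(int bufferSize, int totalBytes) throws IOException {
        RecordingStream sink = new RecordingStream();
        OutputStream out = new BufferedOutputStream(sink, bufferSize);
        byte[] chunk = new byte[1024]; // application writes 1k at a time
        for (int sent = 0; sent < totalBytes; sent += chunk.length) {
            out.write(chunk);
        }
        out.flush();
        return sink.writeSizes;
    }

    public static void main(String[] args) throws IOException {
        // Default 8k buffer: a 1 MB payload reaches the SSL layer as 128 writes.
        System.out.println(writeSizesFor(8 * 1024, 1024 * 1024).size());   // 128
        // 256k buffer: the same payload arrives as only 4 writes.
        System.out.println(writeSizesFor(256 * 1024, 1024 * 1024).size()); // 4
    }
}
```

Each of those writes is what the 1/n-1 split divides into a 32 byte record plus the rest, so fewer, larger writes mean fewer tiny records.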

Java 7 1.7.0_15 with 32k buffer
%% Cached client session: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416

As expected, the larger buffer had minimal (or no) impact with Java 8, since the connection used TLS 1.2.  Java 7 can support TLS 1.2, but by default negotiates TLS 1.0 unless explicitly instructed otherwise:

Footnote 1 - Although SunJSSE in the Java SE 7 release supports TLS 1.1 and TLS 1.2, neither version is enabled by default for client connections. Some servers do not implement forward compatibility correctly and refuse to talk to TLS 1.1 or TLS 1.2 clients.
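To opt in explicitly on Java 7, either of the following standard JSSE mechanisms can be used (a sketch, not code from the post; note the https.protocols property affects HttpsURLConnection only):

```java
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;

public class ForceTls12 {
    public static void main(String[] args) throws Exception {
        // Option 1: for HttpsURLConnection, the https.protocols system
        // property controls which versions the client offers.
        System.setProperty("https.protocols", "TLSv1.2");

        // Option 2: build an SSLContext pinned to TLS 1.2 and install it
        // as the default socket factory.
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default key managers / trust managers
        HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
        System.out.println(ctx.getProtocol()); // TLSv1.2
    }
}
```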

Oracle’s acknowledgement of the BEAST exploit when using TLS 1.0 with CBC (Cipher Block Chaining) is part of CVE-2011-3389:

CVE-2011-3389 — Java Runtime Environment, SSL/TLS protocol (JSSE sub-component), remotely exploitable without authentication.  CVSS base score 4.3 (Access Vector: Network, Access Complexity: Medium, Authentication: None, Confidentiality: Partial, Integrity: None, Availability: None).
Affected: JDK and JRE 7; 6 Update 27 and before; 5.0 Update 31 and before; 1.4.2_33 and before; JRockit R28.1.4 and before.

This is a vulnerability in the SSLv3/TLS 1.0 protocol. Exploitation of this vulnerability requires a man-in-the-middle and the attacker needs to be able to inject chosen plaintext.


To combat the exploit, Oracle's fix was to split each write() to the underlying OutputStream into at least two separate TLS records, with every record having a different initialization vector.  TLS itself caps the maximum record size at 16384 bytes (the size of the raw unencrypted payload).
So with the fix, a write of 16k of client data to the underlying OutputStream results in one TLS record containing the first byte encrypted and a second TLS record containing the remaining 16383 bytes encrypted.  A write of 32k of client data results in three TLS records: one containing the first byte encrypted, the second containing the next 16384 bytes, and the third containing the remaining 16383 bytes encrypted.  So when using TLS 1.0 with CBC, the bigger the buffer associated with each write, the fewer one-byte encrypted TLS records you will see.

To give you an idea of effect that buffer size plays with TLS 1.0 and CBC when the JVM has the fix for BEAST applied:
Assuming a file size of 31527359 (~ 30 Megabytes)
with 16k buffer: 16384 = 1 + 16383 ; 31527359 / 16384 = ~1924 ; so 1924 one byte ssl records, 1924 x 16383 byte ssl records
with 32k buffer: 32768 = 1 + 16384 + 16383; 31527359 / 32768 = ~962 ; so 962 one byte ssl records, 962 x 16384 byte records, and 962 x 16383 byte records
with 64k buffer 65536 = 1 + 16384 + 16384 + 16384 + 16383; 31527359 / 65536 = ~481 ; so 481 one byte ssl records, 3*481*16384 byte records, and 481 x 16383 byte records
with 256k buffer 262144 = 1 + 15*16384 + 16383; 31527359 / 262144 = ~120; so 120 one byte ssl records, 15*120*16384 byte records, and 120 x 16383 byte records
So to summarize, for the 30 megabyte file, buffer size and resulting one-byte SSL records:
16k: 1924 one byte ssl records
32k: 962 one byte ssl records
64k: 481 one byte ssl records
256k: 120 one byte ssl records
512k: 60 one byte ssl records
1024k buffer: 30 one byte ssl records
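The arithmetic above boils down to integer division; a tiny helper (the names are mine) reproduces the table:

```java
public class RecordCount {
    // One 1-byte TLS record is emitted per write() reaching the SSL layer,
    // and each write carries one full buffer, so the count is roughly
    // fileSize / bufferSize.
    static long oneByteRecords(long fileSize, long bufferSize) {
        return fileSize / bufferSize;
    }

    public static void main(String[] args) {
        long file = 31527359L; // the ~30 megabyte file from the example
        System.out.println(oneByteRecords(file, 16 * 1024));   // 1924
        System.out.println(oneByteRecords(file, 256 * 1024));  // 120
        System.out.println(oneByteRecords(file, 1024 * 1024)); // 30
    }
}
```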

Each SSL record carries real processing cost: the client must encrypt and MAC it, it adds TCP/network overhead, and the server must validate and decrypt the payload.
So ideally, going forward, Java 8 with TLS 1.2 is what you want to strive for.  If you are stuck with TLS 1.0, the large buffer will definitely help with performance.

Tuesday, May 6, 2014

Java split a large file – sample code – high performance


Sample Java code to split a source file into chunks.

I needed a quick way to split big log files into manageable chunks that could subsequently be opened with my legacy editor without hitting out-of-memory errors.

I did not trust the available freeware solutions HJSplit / FFSJ etc due to the bad reports indicating potential malware.

So I coded my own using Java NIO (New I/O), which provides excellent performance.

Source code follows:


import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

/**
 * Source code to split a file into chunks using java nio.
 * 2014-05-06 mshannon - created.
 */
public class Split
{
  public static void main(String[] args) throws IOException
  {
    long splitSize = 128 * 1048576; // 128 megabyte file chunks
    int bufferSize = 256 * 1048576; // 256 megabyte memory buffer for reading source file

    // String source = args[0];
    String source = "/C:/Users/mshannon/Desktop/18597996/UCMTRACE/idccs_UCM_server1_1398902885000.log";

    // String output = args[1];
    String output = "/C:/Users/mshannon/Desktop/18597996/UCMTRACE/idccs_UCM_server1_1398902885000.log.split";

    FileChannel sourceChannel = null;
    FileChannel outputChannel = null; // output channel (split file) we are currently writing

    try
    {
      sourceChannel = new FileInputStream(source).getChannel();

      ByteBuffer buffer = ByteBuffer.allocateDirect(bufferSize);

      long totalBytesRead = 0; // total bytes read from channel
      long totalBytesWritten = 0; // total bytes written to output

      double numberOfChunks = Math.ceil(sourceChannel.size() / (double) splitSize);
      int padSize = (int) Math.floor(Math.log10(numberOfChunks) + 1);
      String outputFileFormat = "%s.%0" + padSize + "d";

      long outputChunkNumber = 0; // the split file / chunk number
      long outputChunkBytesWritten = 0; // number of bytes written to chunk so far

      for (int bytesRead =; bytesRead != -1; bytesRead =
      {
        totalBytesRead += bytesRead;

        System.out.println(String.format("Read %d bytes from channel; total bytes read %d/%d", bytesRead,
            totalBytesRead, sourceChannel.size()));

        buffer.flip(); // switch the buffer from fill (write) mode to drain (read) mode

        int bytesWrittenFromBuffer = 0; // number of bytes written from buffer

        while (buffer.hasRemaining())
        {
          if (outputChannel == null)
          {
            outputChunkNumber++;
            outputChunkBytesWritten = 0;

            String outputName = String.format(outputFileFormat, output, outputChunkNumber);
            System.out.println(String.format("Creating new output channel %s", outputName));
            outputChannel = new FileOutputStream(outputName).getChannel();
          }

          long chunkBytesFree = (splitSize - outputChunkBytesWritten); // maximum free space in chunk
          int bytesToWrite = (int) Math.min(buffer.remaining(), chunkBytesFree); // maximum bytes that should be read from current byte buffer

          System.out.println(String.format(
              "Byte buffer has %d remaining bytes; chunk has %d bytes free; writing up to %d bytes to chunk",
              buffer.remaining(), chunkBytesFree, bytesToWrite));

          buffer.limit(bytesWrittenFromBuffer + bytesToWrite); // set limit in buffer up to where bytes can be read

          int bytesWritten = outputChannel.write(buffer);

          outputChunkBytesWritten += bytesWritten;
          bytesWrittenFromBuffer += bytesWritten;
          totalBytesWritten += bytesWritten;

          System.out.println(String.format(
              "Wrote %d to chunk; %d bytes written to chunk so far; %d bytes written from buffer so far; %d bytes written in total",
              bytesWritten, outputChunkBytesWritten, bytesWrittenFromBuffer, totalBytesWritten));

          buffer.limit(bytesRead); // reset limit

          if (totalBytesWritten == sourceChannel.size())
          {
            System.out.println("Finished writing last chunk");
            closeChannel(outputChannel);
            outputChannel = null;
          }
          else if (outputChunkBytesWritten == splitSize)
          {
            System.out.println("Chunk at capacity; closing()");
            closeChannel(outputChannel);
            outputChannel = null;
          }
        }

        buffer.clear(); // ready the buffer for the next read from the source channel
      }
    }
    finally
    {
      closeChannel(outputChannel);
      closeChannel(sourceChannel);
    }
  }

  private static void closeChannel(FileChannel channel)
  {
    if (channel != null)
    {
      try
      {
        channel.close();
      }
      catch (Exception ignore)
      {
      }
    }
  }
}

Thursday, February 13, 2014

Two-way SSL guide: Java, Android, Browser clients and WebLogic Server

The notes below outline the steps I took to test two-way SSL from scratch using updated keytool functionality found in Java 7.  Rather than use a commercial certificate authority like VeriSign (which costs real money), my notes show how to generate your own CA and all PKI artefacts using just the keytool command.  These artefacts can subsequently be utilized for development / testing / private-network scenarios.  Note keytool is simply a CLI / console program shipped with the Java JDK / JRE that wraps the underlying Java security/crypto classes.

If you can follow these steps and understand the process, then transitioning to a commercial trusted certificate authority like VeriSign should be straightforward.

In my previous article I state:

One-way SSL is the mode which most "storefronts" run on the internet so as to be able to accept credit card details and the like without the customer’s details being sent effectively in the clear from a packet-capture perspective.  In this mode, the server must present a valid public certificate to the client, but the client is not required to present a certificate to the server.

With two-way SSL, trust is enhanced by requiring that both the server and the client present valid certificates to each other so as to prove their identity.

From an Oracle WebLogic Server perspective, two-way SSL enables the server to only* accept incoming SSL connections from clients who can present a public certificate that can be validated against the contents of the server’s configured trust store.

*Assuming WebLogic Server  is configured with “Client Certs Requested And Enforced” option.

The actual certificate verification process itself is quite detailed and would make a good future blog post. RFC specifications of interest are RFC 5280 (which obsoletes RFC 3280) and RFC 2818 and RFC 6125.

WebLogic server can also be configured to subsequently authenticate the client based on some attribute (such as cn – common name) extracted from the client’s validated X509 certificate by configuring the Default Identity Asserter; this is commonly known as certificate authentication.  This is not mandatory however - Username/password authentication (or any style for that matter) can still be leveraged on top of a two-way SSL connection.

Now let’s get on with it …

Why do we need Java 7 keytool support?  Specifically for signing certificate requests, and also to be able to generate a keypair with custom X509 extensions such as SubjectAlternativeName / BasicConstraints etc.

High-level, we need the following:
Custom Certificate Authority
Server Certificate signed by Custom CA
Client Certificate signed by Custom CA

Artifacts required for two-way SSL to support WebLogic server and various clients types (Browser / Java etc):
Server keystore in JKS format
Server truststore in JKS format
Client keystore in JKS format
Client truststore in JKS format
Client keystore in PKCS12 keystore format
Client truststore in PKCS12 format
CA certificate in PEM format

Note:  Browsers and mobile devices typically want public certificates in PEM format and keypairs (private key/public key) in PKCS12 format.
Java clients, on the other hand, generally use JKS format keystores.

Steps below assume Linux zsh

Constants – edit accordingly

CA_DNAME="CN=CustomCA, OU=MyOrgUnit, O=MyOrg, L=MyTown, ST=MyState, C=MyCountry"


CLIENT_DNAME="CN=mshannon, OU=MyOrgUnit, O=MyOrg, L=MyTown, ST=MyState, C=MyCountry"

Verify version of Java

(/usr/java/jre/jre1.7.0_45)% export JAVA_HOME=`pwd`
(/usr/java/jre/jre1.7.0_45)% export PATH=$JAVA_HOME/bin:$PATH
(/usr/java/jre/jre1.7.0_45)% java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

Create CA, Server and Client keystores

# -keyalg - Algorithm used to generate the public-private key pair - e.g. DSA
# -keysize - Size in bits of the public and private keys
# -sigalg - Algorithm used to sign the certificate - for DSA, this would be SHA1withDSA, for RSA, SHA1withRSA
# -validity - Number of days before the certificate expires
# -ext bc=ca:true - WebLogic/Firefox require X509 v3 CA certificates to have a Basic Constraint extension set with field CA set to TRUE
# without the bc=ca:true , firefox won't allow us to import the CA's certificate.

# look after the CA_P12_KEYSTORE_FILE - it will contain our CA private key and should be locked away!

keytool -genkeypair -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$KEYSTORE_PASSWORD" \
-keyalg RSA -keysize 1024 -validity 1825 -alias "$CA_KEY_ALIAS" -keypass "$CA_KEY_PASSWORD" -dname "$CA_DNAME" \
-ext "bc=ca:true"

keytool -genkeypair -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-keyalg RSA -keysize 1024 -validity 1825 -alias "$SERVER_KEY_ALIAS" -keypass "$SERVER_KEY_PASSWORD" -dname "$SERVER_DNAME"

keytool -genkeypair -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-keyalg RSA -keysize 1024 -validity 1825 -alias "$CLIENT_KEY_ALIAS" -keypass "$CLIENT_KEY_PASSWORD" -dname "$CLIENT_DNAME"

Export CA certificate

# -rfc - means to output in PEM (rfc style) base64 encoded format, output will look like ----BEGIN.... etc

keytool -exportcert -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -rfc

Generate certificate signing requests for client and server (to be supplied to CA for subsequent signing)

# The public certificates for the client and server keypairs created above are currently self-signed
# (such that, issuer = subject , private key signed its associated public certificate)
# For a production server we want to get our public certificate signed by a valid certificate authority (CA).
# We are going to use the customca we created above to sign these certificates.
# We first need to get a certificate signing request ready ...

keytool -certreq -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-alias "$SERVER_KEY_ALIAS" -keypass "$SERVER_KEY_PASSWORD" -file "$SERVER_CSR"

keytool -certreq -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-alias "$CLIENT_KEY_ALIAS" -keypass "$CLIENT_KEY_PASSWORD" -file "$CLIENT_CSR"

Sign certificate requests

keytool -gencert -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$KEYSTORE_PASSWORD" \
-validity 1825 -alias "$CA_KEY_ALIAS" -keypass "$CA_KEY_PASSWORD" \
-infile "$SERVER_CSR" -outfile "$SERVER_CER" -rfc

keytool -gencert -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$KEYSTORE_PASSWORD" \
-validity 1825 -alias "$CA_KEY_ALIAS" -keypass "$CA_KEY_PASSWORD" \
-infile "$CLIENT_CSR" -outfile "$CLIENT_CER" -rfc

Import signed certificates

# Once we complete the signing, the certificate is no longer self-signed, but rather signed by our CA.
# the issuer and owner are now different. 

# Now we are ready to import the signed public certificates back in to the keystores ...

# keytool prevents us from importing a certificate should it not be able to verify the full signing chain.
# As we leveraged our custom CA, we need to import the CA's public certificate in to our keystore as a
# trusted certificate authority prior to importing our signed certificate.
# otherwise - 'Failed to establish chain from reply' error will occur

keytool -import -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

keytool -import -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

keytool -import -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-alias "$SERVER_KEY_ALIAS" -keypass "$SERVER_KEY_PASSWORD" -file "$SERVER_CER" -noprompt

keytool -import -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-alias "$CLIENT_KEY_ALIAS" -keypass "$CLIENT_KEY_PASSWORD" -file "$CLIENT_CER" -noprompt

Create trust stores

# for one-way SSL
# the trust keystore for the client needs to include the certificate for the trusted certificate authority that signed the certificate for the server

# for two-way SSL
# the trust keystore for the client needs to include the certificate for the trusted certificate authority that signed the certificate for the server
# the trust keystore for the server needs to include the certificate for the trusted certificate authority that signed the certificate for the client

# for two-way SSL connection, the client verifies the identity of the server and subsequently passes its certificate to the server.
# The server must then validate the client identity before completing the SSL handshake

# given our client and server certificates are both issued by the same CA, the trust stores for both will just contain the custom CA cert

keytool -import -v -keystore "$SERVER_JKS_TRUST_KEYSTORE_FILE" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

keytool -import -v -keystore "$CLIENT_JKS_TRUST_KEYSTORE_FILE" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

Create PKCS12 formats for Android / Browser clients

# note - warning will be given that the customca public cert entry cannot be imported
# TrustedCertEntry not supported
# this is expected
# As of JDK 6, standards for storing Trusted Certificates in "pkcs12" have not been established yet

keytool -importkeystore -v \
-srckeystore "$CLIENT_JKS_KEYSTORE_FILE" -srcstoretype JKS -srcstorepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-destkeystore "$CLIENT_P12_KEYSTORE_FILE" -deststoretype PKCS12 -deststorepass "$CLIENT_P12_KEYSTORE_PASSWORD"

Configuring WebLogic server for two-way SSL

cp "$SERVER_JKS_KEYSTORE_FILE" /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib
cp "$SERVER_JKS_TRUST_KEYSTORE_FILE" /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib

In order to leverage the above keystores, connect to the Administration Console, expand Environment and select Servers.
Choose the server for which you want to configure the identity and trust keystores, and select Configuration > Keystores.
Change Keystores to "Custom Identity and Custom Trust".

You would then fill in the relevant fields (keystore fully qualified path, type (JKS), and keystore access password). e.g.

Custom Identity Keystore: /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib/server.jks
Custom Identity Keystore Type: JKS
Custom Identity Keystore Passphrase: welcome1

Custom Trust Keystore: /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib/server-trust.jks
Custom Trust Keystore Type: JKS
Custom Trust Keystore Passphrase: welcome1


Next, you need to enable the SSL Listen Port for the server:
[Home >Summary of Servers >XXX > ] Configuration > General.

SSL Listen Port: Enabled (Check)
SSL Listen Port: XXXX


Next, you need to tell WebLogic the alias and password in order to access the private key from the Identity Store:
[Home >Summary of Servers >XXX > ] Configuration > SSL.

Identity and Trust Locations: Keystores
Private Key Alias: server
Private Key Passphrase: welcome1


Click Advanced at the bottom of the page ([Home >Summary of Servers >XXX > ] Configuration > SSL).
Set the Two Way Client Cert Behaviour attribute to "Client Certs Requested And Enforced"

    Client Certs Not Requested: The default (meaning one-way SSL).
    Client Certs Requested But Not Enforced: Requires a client to present a certificate. If a certificate is not presented, the SSL connection continues.
    Client Certs Requested And Enforced: Requires a client to present a certificate. If a certificate is not presented, the SSL connection is terminated.

Check "Use JSSE SSL" box
Failure to check this box above will likely result in "Cannot convert identity certificate" error when restarting the managed server, and the HTTPS port won't be open for connections.


Restart server.

Configuring Browser client

From Internet Explorer > Tools > Internet Options > Contents > Certificates

Import under the Trusted Root Certification Authorities tab the ca.pem file
- Place all certificates in the following store: "Trusted Root Certification Authorities"
It will popup Security Warning - You are about to install a certificate from a certification authority (CA) claiming to represent.....
Do you want to install this certificate: Yes

Import under the Personal tab the client.p12 file
It will popup with 'To maintain security, the private key was protected with a password'
Supply the password for the private key: welcome1
- Place all certificates in the following store: "Personal"


From Firefox > Tools > Options > Advanced > Encryption > View Certificates
From the "Authorities" tab  , import ca.pem
check "Trust this CA to identify websites"
(It will get listed under the organization "MyOrg" in the tree based on the certificate dname that was leveraged)
From the "Your Certificates" tab, import client.p12
Enter the password that was used to encrypt this certificate backup: welcome1
It should state "Successfully restored your security certificate(s) and private key(s)."

Accessing a server configured for two-way SSL from a Browser

IE - May automatically supply the certificate to the server
Firefox -
"This site has requested that you identify yourself with a certificate"
"Choose a certificate to present as identification"
x Remember this decision

Accessing a server configured for two-way SSL from a Java client

Invoke Java with explicit SSL trust store elements set (to trust server's certificate chain) ...

System.setProperty("", "/C:/Users/mshannon/Desktop/client-trust.jks");
System.setProperty("", "JKS");
System.setProperty("", "welcome1");

Otherwise you will see ...
" PKIX path building failed: unable to find valid certification path to requested target"

Given we are using two-way SSL, we must also specify the client certificate details ...

System.setProperty("", "/C:/Users/mshannon/Desktop/client.jks");
System.setProperty("", "JKS");
System.setProperty("", "welcome1");

Otherwise you will see ...
" Received fatal alert: bad_certificate"

Unknown Information – Is it possible through System properties to specify which key (based on alias) to use from the client keystore, and also can a password be provided for such a key (alias)?
Currently we are relying on the fact that the client keystore just contains the one key-pair and that key's alias password entry matches the keystore password!
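To my knowledge the standard javax.net.ssl system properties offer no way to pick a key by alias or give that key its own password.  A programmatic SSLContext does allow it; below is a minimal sketch using standard JSSE classes (the class and method names are mine; selecting a specific alias would additionally require wrapping the X509KeyManager):

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class TwoWaySsl {
    // Build a socket factory from explicit identity/trust stores instead of
    // the javax.net.ssl.* system properties. Passing a null trust store
    // falls back to the JDK's default cacerts.
    static SSLSocketFactory socketFactoryFor(KeyStore identity, char[] keyPassword,
                                             KeyStore trust) throws Exception {
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(identity, keyPassword); // key password can differ from the store password here

        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trust);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx.getSocketFactory();
    }
}
```

You would load client.jks and client-trust.jks into KeyStore instances and install the result via HttpsURLConnection.setDefaultSSLSocketFactory(...).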

Configuring WebLogic Server to allow authentication using the client certificate

The above steps should have resulted in two-way SSL transport security.  WebLogic however can also be configured to extract the client username from the client supplied public certificate and map this to a user in the identity store.  To do this, we need to configure Default Identity Asserter to allow authentication using the client certificate.  The deployed web application must also allow client certificate authentication as a login method.

Configuring Default Identity Asserter to support X.509 certificates involves:

1) Connecting to the WebLogic Server Administration Console as an administrator (e.g. weblogic user)
2) Navigating to appropriate security realm (e.g. myrealm)
3) Under the Providers tab and Authentication sub-tab selecting DefaultIdentityAsserter (WebLogic Identity Assertion provider)
4) On the Settings for DefaultIdentityAsserter page under the Configuration tab and Common sub-tab choosing X.509 as a supported token type and clicking Save.
5) On the Settings for DefaultIdentityAsserter page under the Configuration tab and Provider-Specific sub-tab enabling the Default User Name Mapper and choosing the appropriate X.509 attribute within the subject DN to leverage as the username and correcting the delimiter as appropriate and clicking Save.
6) Restarting the WebLogic Server.

The deployed web application must also list CLIENT-CERT as an authentication method in the web.xml descriptor file:
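The relevant entry is the standard Servlet login-config element, e.g. (a minimal fragment; your web.xml will contain more):

```xml
<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>
```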



Thursday, August 22, 2013

Synology Remote Shutdown Poweroff BusyBox

Being the energy conserving environmentalist I am (read – tightass) I look for ways to reduce unnecessary power consumption. I have quite a few devices that are plugged in 24x7, but get limited use.  This includes my HTPC setup that comprises a Mac Mini i5 (bootcamp Windows 8) and a Synology DS413j NAS.  In the previous blog post I outlined the costs involved in running the Synology DS413j.  The DS413j does not support system hibernation, only disk hibernation.  From my testing, it pulls about 11 W continuously when the hard disks are hibernated.  In the grand scheme of things, this is not a big deal.

So what can be done to reduce power consumption?  Synology supports scheduled power-on / power-off events through their DSM.  There is also the very useful Advanced Power Manager package that takes this a step further by preventing the shutdown whilst any detected network/disk activity is above a specific threshold.  This way, if you are watching a movie and the scheduled power-off time arrives midway, the NAS won’t shut down until disk/network activity falls below the configured threshold.

Advanced Power Manager

In the screenshot above, I have configured Power Off times for every day of the week, but not necessarily power on times.  This means the NAS will always shutdown at night if running, but not necessarily start again in the morning automatically.

Whilst the above package is great, I wanted to go a step further and support remote shutdown of the NAS triggered by a desktop shortcut and/or remote control event.  In my HTPC setup, I’m using EventGhost to coordinate infrared remote control events to specific actions. I leverage the On Screen Menu EventGhost plugin capability to display a menu of options rendered on my (television) screen for interacting with the HTPC.  This includes launching XBMC, Suspending and Shutting down the Mac Mini, Sending a Wake On Lan packet to the Synology.  I want to add a new option to this menu to shutdown the NAS.

One would think remote shutdown is pretty simple.  In fact it can be very simple: enable SSH on the NAS, and then leverage something like PuTTY to make a "root" user connection to the NAS, supplying a "Remote command" option such as /sbin/poweroff.


You would simply save the putty session under some specific name (e.g. root-shutdown), then trigger it using a shortcut link such as "C:\Program Files (x86)\PuTTY\PUTTY.EXE" -load "root-shutdown"

But what if you want to grant the ability to power off the NAS to some non-root user?  Would it not be great to have some user, say the fictitious user “netshutdown”, who simply by connecting to the NAS through telnet / SSH would result in the NAS shutting down?

This would be easily accomplished using something like sudo, whereby administration commands can easily be delegated.  However "sudo" is not available in the standard DSM 4.2 install on the DS413j.  Simple then: let's leverage suid on the poweroff executable so that it runs as root.  However, the poweroff executable is actually a symbolic link to /bin/busybox.  Setting suid on busybox also makes no difference:


Setting suid on busybox does however allow “su” to function from a non-root user.


I’m not comfortable setting suid on the busybox executable given pretty much every command in /bin and a number from /sbin are linked to it.  One extreme word of caution!!! Do not make the mistake of executing chmod a-x on the busybox executable or anything linked to it!  You will hose your system.  I was extremely lucky to have perl installed on my NAS and to have not logged out from my root session, and was able to leverage perl’s chmod function to restore permissions! (what a relief)


If you search for busybox, poweroff, and suid, you will find a number of results describing techniques such as employing /etc/busybox.conf to call out specific applets and who can run them, creating C wrapper programs that leverage execve to call busybox, or setuid and system to call /sbin/poweroff etc.  I tried all of these and none of them worked with the compiled busybox executable on my NAS; I would receive Permission denied / Operation not permitted errors.

Finally however, I cracked it.

I created a wrapper program that rather than call /sbin/poweroff, calls a shell script owned by root, which in turn triggers poweroff.

The program (shutdown.c) is as follows:

#include <stdio.h>     /* printf function */
#include <stdlib.h>    /* system function */
#include <sys/types.h> /* uid_t type used by setuid */
#include <unistd.h>    /* setuid function */

int main()
{
  printf("Invoking /bin/ ...\n");
  setuid(0);              /* become root - the wrapper binary is owned by root with the suid bit set */
  system("/bin/");  /* the script name was stripped from the original post; "" is a stand-in */
  return 0;
}

/bin/ (the script name was stripped from the original post; "" is a stand-in) is as follows:

#!/bin/sh
echo Triggering poweroff command
/sbin/poweroff

I followed the developer guide to determine how to compile C programs for the NAS.  The DS413j leverages a Marvell Kirkwood mv6282 ARMv5te CPU, so I needed to leverage the toolchain for Marvell 88F628x.

As I had no native/physical linux machine available for the compilation, I decided to use VirtualBox on Windows and download / leverage the Lubuntu 12.10 VirtualBox image which is a lightweight version of Ubuntu.  I set the network card to bridged, started the image, authenticated as lubuntu/lubuntu and updated/added some core packages:

sudo apt-get update
sudo apt-get install build-essential dkms gcc make
sudo apt-get install linux-headers-$(uname -r)

sudo -s
cd /tmp
tar zxpf gcc421_glibc25_88f6281-GPL.tgz -C /usr/local/

cat > /tmp/shutdown.c <<EOF
#include <stdio.h>     /* printf function */
#include <stdlib.h>    /* system function */
#include <sys/types.h> /* uid_t type used by setuid */
#include <unistd.h>    /* setuid function */

int main()
{
  printf("Invoking /bin/ ...\n");
  system("/bin/" );
  return 0;
}
EOF


lubuntu@lubuntu-VirtualBox:/tmp$ /usr/local/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc shutdown.c -o shutdown
lubuntu@lubuntu-VirtualBox:/tmp$ ls -ltr
total 16
-rw-rw-r-- 1 lubuntu lubuntu  308 Aug 22 01:29 shutdown.c
-rwxrwxr-x 1 lubuntu lubuntu 6715 Aug 22 01:29 shutdown

Now that the shutdown executable was created, I uploaded it to the /bin directory on the NAS.

Next I needed the user/group account infrastructure in place on the NAS in order to trigger it.  I created a user named “netshutdown” and a group named “shutdown” using the DiskStation Web UI Control Panel User/Group widgets.  I also made sure the SSH service was enabled (Control Panel > (Network Services >) Terminal > Enable SSH Service).

If you try to SSH to the NAS as the newly created user leveraging username/password authentication, you will see that you are not presented with a shell.  This is because Synology locks the user down, as can be seen by viewing the passwd file while connected as root:

media> cat /etc/passwd
ftp:x:21:21:Anonymous FTP User:/nonexist:/sbin/nologin
anonymous:x:21:21:Anonymous FTP User:/nonexist:/sbin/nologin
smmsp:x:25:25:Sendmail Submission User:/var/spool/clientmqueue:/sbin/nologin
postfix:x:125:125:Postfix User:/nonexist:/sbin/nologin
dovecot:x:143:143:Dovecot User:/nonexist:/sbin/nologin
spamfilter:x:783:1023:Spamassassin User:/var/spool/postfix:/sbin/nologin
admin:x:1024:100:System default user:/var/services/homes/admin:/bin/sh

Notice the netshutdown user has the shell set as “/sbin/nologin”, and the home directory set to “/var/services/homes/netshutdown”.  There is no such “homes” directory on my instance.

I edited /etc/passwd and changed the netshutdown entry so that its shell was /bin/sh and its home directory was /home/netshutdown.  65536 is the group id of the new shutdown group:

media> cat /etc/group

I then created the home directory for the user:
mkdir -p /home/netshutdown
chown netshutdown /home/netshutdown

Test it out …

media> su - netshutdown

BusyBox v1.16.1 (2013-04-16 20:15:54 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

media> pwd

My user/group was now in place, so it was time to set permissions and configure the shutdown executable and shell script:

cd /bin
chown root.shutdown shutdown
chmod 4750 shutdown

media> ls -la shutdown*
-rwsr-x---    1 root     shutdown      6715 Aug 22 09:31 shutdown
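As an aside, mode 4750 breaks down as the setuid bit (4), rwx for the owning user (7), r-x for the group (5), and no access for others (0); the setuid bit is what shows up as the ‘s’ in the listing above. A quick sketch against a scratch file (the path is just an example):

```shell
# Demonstrate mode 4750 on a hypothetical scratch file
touch /tmp/demo-mode
chmod 4750 /tmp/demo-mode

ls -l /tmp/demo-mode         # owner-execute slot shows 's': -rwsr-x---
stat -c '%a' /tmp/demo-mode  # prints the octal mode including the setuid bit
```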

cat > /bin/ <<EOF
echo Triggering poweroff command
/sbin/poweroff
EOF

media> ls -la
-rw-r--r--    1 root     root            48 Aug 22 09:35

chmod 700

media> ls -la shutdown*
-rwsr-x---    1 root     shutdown      6715 Aug 22 09:31 shutdown
-rwx------    1 root     root            48 Aug 22 09:35

For the netshutdown user to automatically trigger the shutdown executable on connection, I had a few different options:

Option 1) Change the user’s login shell from /bin/sh to be /bin/shutdown
Option 2) Create a .profile file for the user, and trigger the /bin/shutdown command from the user’s .profile

I decided on the latter.

echo "/bin/shutdown" > /home/netshutdown/.profile
chown netshutdown /home/netshutdown/.profile

Now it was time to test the power off …


At this stage, we can now easily create a new putty session that connects via SSH as user netshutdown with username/password authentication.  We can invoke that saved session using a shortcut such as PUTTY.EXE -load "XXX Session Name".

If you are a sucker for punishment, you can make this thing a bit more complex by utilizing public key authentication (rather than username/password).  In the most simple form, you leverage a tool such as puttygen.exe to generate a keypair (private key and public key) for a particular user.  You enable public key authentication on your NAS, and upload the user’s public key to the NAS. You then configure putty to authenticate using public key authentication and point it to the location of your private key.  You can go a step further by encrypting your private key so that a passphrase must be supplied on login in order to extract the private key and authenticate.  You can also utilize the Putty Authentication Agent (Pageant) to store the decrypted private key in memory, and have putty sessions consult Pageant for the private key on authentication.  This blog post does a good job of describing the options.

To get public key authentication up and running quickly on your NAS follow steps similar to the following :

1) Run puttygen.exe
2) Generate new SSH-2 RSA Key without a passphrase
3) Save public key and private key to files (e.g publickey.txt and privatekey.ppk)
4) (This should have already been done) From Synology DiskStation UI, Go to Control Panel > (Network Services >) Terminal > Enable SSH Service

5) Next SSH using putty.exe to the NAS as the root user

6) Edit /etc/ssh/sshd_config

uncomment the following two lines
#PubkeyAuthentication yes
#AuthorizedKeysFile     .ssh/authorized_keys

and save the file

7) Connect as the end-user concerned, create the ~/.ssh directory, and create the file ~/.ssh/authorized_keys

8) Add the public key text from above in to the authorized_keys file and save it ...

ssh-rsa AAAAB3Nza......== ....

9) Change the permissions
chmod 700 ~/.ssh
chmod 644 ~/.ssh/authorized_keys

10) Open Putty and make the following changes to the session ...

Connection type: SSH
Connection->Data->Auto-login username: netshutdown
Connection->SSH->Auth->Private Key: Your Private Keyfile from above

11) Save the session - e.g. netshutdown-media-nas

12) Create a new shortcut on your desktop with the following target "C:\Program Files\PuTTY\putty.exe" -load "netshutdown-media-nas"

Notes - The private key above contains no passphrase, and is essentially equivalent to having a password stored in clear text in a file on the desktop.  Where security is required, configure the private key with a passphrase.  You can then run Pageant to hold the decrypted private key in memory by supplying the passphrase once; this prevents the continuous password prompting on establishing each new session.

Synology DS413J Power Consumption

I just purchased the entry level Synology DS413j to use for my HTPC media storage.  It is running DSM 4.2-3211.  The Synology specs state that in HDD hibernation mode, the DS413j should use 7.68 watts.  From my testing using an Arlec 240v power meter with just a single 3TB Western Digital Red WD30EFRX installed, the Synology was reading 15w when running, and 11w after HDD hibernation kicked in due to 20 minutes of disk inactivity.  I’m a little annoyed that it is not performing at the advertised 7.68 watts – that is about $6 per year in extra energy at my tariff of 21.351 cents/kWh.  I imagine in order to get that value, Synology must have stripped the NAS down to the bare services required.

The DS413 model supposedly can do full system hibernation and drop to 3.37w.  If your NAS will be idle 99% of the time, but powered on 24x7, and assuming the 21.351 cents/kWh tariff value from above, you are looking at $6.30 per year to run the DS413 at 3.37w versus $20.57 for the DS413J at 11w:

DS413J with only HDD hibernation –> 365 days per year * 24 hours per day * .011 kW * .21351 $/kWh (Origin Energy) = $20.57 per year to run.

DS413 with system hibernation –> 365 days per year * 24 hours per day * .00337 kW * .21351 $/kWh (Origin Energy) = $6.30 per year to run.

Thus, I could save $14 a year by purchasing the DS413 over the DS413J.

DS413j = $365 ; DS413 = $519.    (519 – 365) / 14 = 11.   It would take 11 years for the DS413 to pay for itself on energy savings assuming I never use the NAS :)

If on the other hand, you are actively using the NAS 24x7 and fully loaded with 4 disks, you are looking at around 35w for both DS413 and DS413J.

365 days per year * 24 hours per day * .035 kW * .21351 $/kWh (Origin Energy) = $65.46 per year to run.
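The arithmetic above can be sketched as a small shell helper (the wattages and the 21.351 c/kWh tariff are the figures quoted in this post):

```shell
# Annual running cost: watts -> kW, 8760 hours per year, tariff in $/kWh
annual_cost() {
  awk -v w="$1" -v t="$2" 'BEGIN { printf "%.2f\n", 365 * 24 * (w / 1000) * t }'
}

annual_cost 11    0.21351   # DS413J, HDD hibernation only   -> 20.57
annual_cost 3.37  0.21351   # DS413, full system hibernation -> 6.30
annual_cost 35    0.21351   # either unit, active, 4 disks   -> 65.46
```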

Thursday, May 16, 2013

Unused CSS rules - Optimize HTML pages by removing unreferenced CSS - for free!

Google provides some very useful tools and services that can be leveraged to audit web page performance and thus assist in optimizing HTML pages.  However these do have their warts.

Page Speed for example provides a free online audit check, as well as Firefox and Chrome browser plugins that can be leveraged to analyse a web page’s performance and assist with optimization.  There are two drawbacks with these tools/services:

  1. The HTML file must be hosted on a HTTP server and a local file cannot be utilized.  Otherwise a message such as the following will occur “Unable to run PageSpeed Insights on file:///C:/Users/mshannon/Desktop/test2.html. Please navigate to a HTTP(s) web page.”
  2. The browser extensions can be used to “minify” both the HTML and CSS (by removing whitespace / formatting etc), but do not seem to be able to accurately detect unreferenced CSS

My solution to the hosted HTML files was to piggyback on Dropbox.  I created myself a public folder in my Dropbox account and uploaded the html and css files.  I then right-clicked on the uploaded files and obtained a public link for each.

Once I had the hosted HTML files, I could leverage the Page Speed service and utilize the browser developer tools.  The Firefox Page Speed extension provides an addon to Firebug.  You can see in the image below it has an option to save optimized files to a local location.


I leveraged the Firefox Page Speed extension purely to get the cleaned/formatted html and css files.

What was left now to perform was to remove the unreferenced CSS rules.

Google Chrome bundles a useful (crippled) developer tool that can be leveraged to audit web page performance and identify unused CSS rules.   Unlike Page Speed, this particular tool does a great job of identifying unreferenced rules.  It is accessed from Chrome through the following menu/popup combination: Tools > Developer Tools > Audits > Web Page Performance > Run


You can see by running the tool below, it found a huge number of unreferenced rules:


The problem with this tool however, is that it does not provide a mechanism to write out the good (referenced/used) CSS rules.

A bit of searching around, and I found this excellent post:

Essentially, the dude had patched the chrome developer tools to write out the good css rules when running the audit.  The problem for me though, was this was done using Linux builds, and also I wasn’t sure whether newer builds of chrome would offer better detection capabilities.

So I set about getting the latest windows chrome builds from

What I found however, is that they stopped shipping at build 158804. At the time of writing this, they were now up to build 200347.  Additionally, the contents of with build 158804 did not seem to match the files that bootstraponline referenced in his/her patch.  So I went trawling backwards in builds until I found the very last version to ship with AuditRules.js inside  This was build 152197 which is circa August 2012.

You need the following two files:

Next …

  1. extract build 152197 of (e.g. to desktop) ; this automatically creates a folder named "chrome-win32"
  2. extract build 152197 of to a *new* folder named "devtools_frontend" under "chrome-win32"; the "devtools_frontend" folder needs to be created
  3. create a new folder named "user-data" under "chrome-win32"
  4. create a run.bat file to trigger execution of chrome similar to the following:

@echo off
set CHROMEDIR=C:\Users\mshannon\Desktop\chrome-win32
%CHROMEDIR%\chrome.exe --user-data-dir=%CHROMEDIR%\user-data --debug-devtools-frontend=%CHROMEDIR%\devtools_frontend

Execute the bat file and see if chrome successfully loads.  Once confirmed, close down chrome.

The patch diff that bootstraponline provides is mostly accurate.  The one change though is you must now utilize “"used.css", usedCss, true);”

You can download the patched AuditRules.js file for the above build 152197 from my dropbox:

Simply replace devtools_frontend\AuditRules.js with the file above.

Having performed the patch, fire up chrome from the bat file.  Invoke the audit tool against the web page (or local file) and you will be presented with a save-as box should it detect unreferenced CSS rules. The file to be saved (used.css) contains the rules that were referenced.


Thanks bootstraponline and Google!

Saturday, April 13, 2013

Recover / Decrypt Weblogic password from boot.properties

When installing a Weblogic domain in development mode, the Configuration wizard will generate a boot identity file for the administration server containing the encrypted username and password of the initial administrative user. These credentials are then automatically leveraged when starting the admin server, avoiding the need for the weblogic administrator to manually supply them. It is also possible to utilize a boot identity file (boot.properties) in production domains.

Recovering/decrypting a credential value from the boot identity file is reasonably straightforward should you have shell and executable access to the Weblogic installation.

First, obtain the DOMAIN_HOME value …

ps auxwww | grep Name=AdminServer | tr " " "\n" | grep "domain.home"


Next, source the file …

export DOMAIN_HOME=/u01/app/oracle/product/Middleware/user_projects/domains/base_domain

source $DOMAIN_HOME/bin/

Extract the encrypted username and password credential from the boot identity file (boot.properties) ...

USR=`grep username $DOMAIN_HOME/servers/AdminServer/security/boot.properties | sed -e "s/^username=\(.*\)/\1/"`

PW=`grep password $DOMAIN_HOME/servers/AdminServer/security/boot.properties | sed -e "s/^password=\(.*\)/\1/"`

Sample values …

mshannon@slc05elc% echo $USR

mshannon@slc05elc% echo $PW

Create the small java Decrypt program and invoke it supplying the DOMAIN_HOME and encrypted value requiring decryption …

cat > /tmp/ <<EOF
public class Decrypt {
  public static void main(String[] args) {
    System.out.println("Decrypted value: " + new weblogic.security.internal.encryption.ClearOrEncryptedService(
        weblogic.security.internal.SerializedSystemIni.getEncryptionService(args[0])).decrypt(args[1]));
  }
}
EOF

$JAVA_HOME/bin/javac -d /tmp /tmp/

$JAVA_HOME/bin/java -cp /tmp:$CLASSPATH Decrypt "$DOMAIN_HOME" "$USR"

$JAVA_HOME/bin/java -cp /tmp:$CLASSPATH Decrypt "$DOMAIN_HOME" "$PW"

Sample output … 

mshannon@slc05elc% $JAVA_HOME/bin/java -cp /tmp:$CLASSPATH Decrypt "$DOMAIN_HOME" "$USR"
Decrypted value: weblogic

mshannon@slc05elc% $JAVA_HOME/bin/java -cp /tmp:$CLASSPATH Decrypt "$DOMAIN_HOME" "$PW"
Decrypted value: welcome1

Wednesday, March 27, 2013

Simple Chrome black background dark theme extension - How to build one in 5 minutes

There are times when a white background is best replaced with a black one.  If you are using Chrome, you can build your own custom extension in just a few minutes that can easily modify colours of your favourite website(s). To do this, we leverage the Content Scripts feature available to Chrome Extensions that is kind of like a poor man’s version of GreaseMonkey.  Best of all though, there are no closed custom 3rd party Chrome extensions to install from some unknown developer, here you will be the developer!

Content Scripts provide a mechanism to manipulate the DOM. The DOM is essentially a tree of all objects that constitute the webpage (images / links / text / styles etc).

What we are going to do is create an unpacked extension that simply adds a new stylesheet link at the end of the page load which will change the background colour to black, and the text/link/visited-link colours to various shades of grey. For our test, we will modify all pages falling under and

To get started, first create yourself a folder (on your Desktop or wherever) that will host the two files that comprise our extension, for example "Black Background Extension"

Within this folder, create a file named toggle.js that has the following contents:

var cssStyle='* { background: black !important; color: #EEEEEE !important }'
+ ' :link, :link * { color: #A1A1A1 !important }'
+ ' :visited, :visited * { color: #505050 !important }';

if(document.createStyleSheet) {
  document.createStyleSheet('data:text/css,'+escape(cssStyle));
} else {
  var cssLink = document.createElement('link');
  cssLink.rel = 'stylesheet';
  cssLink.href = 'data:text/css,'+escape(cssStyle);
  document.getElementsByTagName('head')[0].appendChild(cssLink);
}

Next, create a file named manifest.json that has the following contents:

  "name": "Black Background",
  "version": "1.0",
  "description": "Sets background colour to black, and text, link, and visited link to shades of grey",
  "manifest_version": 2,
  "content_scripts": [
      "matches": ["http://**", "http://**"],
      "js": ["toggle.js"],
      "run_at": "document_end",
      "all_frames": true

Now, it is a simple matter of firing up Chrome, and typing in chrome://extensions in the address bar. Once the Extensions are displayed, activate the Developer mode option.


Next, choose the Load unpacked extension… button and navigate to the folder created above hosting our unpacked extension.


If all goes to plan, our extension should now be listed.  We also have the option of packing our extension into a signed zip file that would allow us to redistribute it.


Finally, simply restart the Chrome browser and attempt to access a site from our match rules ( for example).  You should see that the background colour is now black!


For details on the content_scripts options leveraged in our manifest, refer to the Chrome documentation


Wednesday, March 20, 2013

How to determine if an Oracle LOB is stored as a SECUREFILE or BASICFILE

The DESCRIBE command on an Oracle table is not sufficient to determine whether a LOB column is stored as a SECUREFILE or a regular old BASICFILE. Instead you must query USER_LOBS (or DBA_LOBS etc), or alternatively leverage the PL/SQL dbms_lob.issecurefile function.

% sqlplus

SQL*Plus: Release Production on Tue Mar 19 18:04:53 2013

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter compatible;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string

SQL> show parameter db_securefile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_securefile                        string      PERMITTED


The DB_SECUREFILE parameter specifies whether or not to treat LOB files as SecureFiles by default.

NEVER - LOBs that are specified as SecureFiles are created as BasicFile LOBs. Any SecureFile-specific storage options specified will result in an exception.
PERMITTED - LOBs are allowed to be created as SecureFiles, but will be created as BasicFile by default.
ALWAYS - All LOBs created in the system are created as SecureFile LOBs.
IGNORE - The SECUREFILE keyword and all SecureFile options are ignored.

If the COMPATIBLE parameter is not set to 11.1 or higher, then LOBs are not treated as SecureFiles.


SQL> create user matt identified by welcome1;

User created.

SQL> grant create session to matt;

Grant succeeded.

SQL> grant create table to matt;

Grant succeeded.

SQL> grant unlimited tablespace to matt;

Grant succeeded.

SQL> conn matt/welcome1

  lob1 BLOB
,lob2 BLOB
,lob3 BLOB
,lob4 BLOB

Table created.

Logging options:

LOGGING - LOB changes generate full entries in redo logs
NOLOGGING - LOB changes are not logged in the redo logs and cannot be replayed in the event of failure.

Caching options

CACHE - LOB data is placed in the buffer cache.
CACHE READS - LOB data is placed in the buffer cache only during read operations, not during write operations.
NOCACHE - LOB data is not placed in the buffer cache; if it is read in, it is placed at the least recently used end of the LRU list.

SecureFile LOBs also support the FILESYSTEM_LIKE_LOGGING option, which is similar to the metadata journaling of file systems.


SQL> desc test;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
LOB1                                               BLOB
LOB2                                               BLOB
LOB3                                               BLOB
LOB4                                               BLOB

SQL> desc user_lobs;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
TABLE_NAME                                         VARCHAR2(30)
COLUMN_NAME                                        VARCHAR2(4000)
SEGMENT_NAME                                       VARCHAR2(30)
TABLESPACE_NAME                                    VARCHAR2(30)
INDEX_NAME                                         VARCHAR2(30)
CHUNK                                              NUMBER
PCTVERSION                                         NUMBER
RETENTION                                          NUMBER
FREEPOOLS                                          NUMBER
CACHE                                              VARCHAR2(10)
LOGGING                                            VARCHAR2(7)
ENCRYPT                                            VARCHAR2(4)
COMPRESSION                                        VARCHAR2(6)
DEDUPLICATION                                      VARCHAR2(15)
IN_ROW                                             VARCHAR2(3)
FORMAT                                             VARCHAR2(15)
PARTITIONED                                        VARCHAR2(3)
SECUREFILE                                         VARCHAR2(3)
SEGMENT_CREATED                                    VARCHAR2(3)


set linesize 100

col Column format a6
col isSecureFile format a12
col Compressed format a10
col DeDuplicated format a12
col Encrypted format a9
col StoredInRow format a11
col Logging format a7
col Cached format a10

  column_name as "Column"
,securefile as "isSecureFile"
,compression as "Compressed"
,deduplication as "DeDuplicated"
,encrypt as "Encrypted"
,in_row as "StoredInRow"
,logging as "Logging"
,cache as "Cached"
FROM user_lobs
WHERE table_name = 'TEST'

Column isSecureFile Compressed DeDuplicated Encrypted StoredInRow Logging Cached
------ ------------ ---------- ------------ --------- ----------- ------- ----------
LOB1   NO           NONE       NONE         NONE      YES         YES     NO
LOB2   NO           NONE       NONE         NONE      YES         YES     NO
LOB3   YES          NO         NO           NO        YES         YES     NO
LOB4   YES          MEDIUM     LOB          NO        YES         YES     NO



insert into test values(empty_blob(), empty_blob(), empty_blob(), empty_blob());

set serveroutput on

DECLARE
  l1 BLOB; l2 BLOB; l3 BLOB; l4 BLOB;
BEGIN
  SELECT lob1, lob2, lob3, lob4
  INTO l1, l2, l3, l4
  FROM test
  WHERE rownum = 1;

  IF dbms_lob.issecurefile(l1) THEN
    dbms_output.put_line('Stored in a securefile');
  ELSE
    dbms_output.put_line('Not stored in a securefile');
  END IF;

  IF dbms_lob.issecurefile(l2) THEN
    dbms_output.put_line('Stored in a securefile');
  ELSE
    dbms_output.put_line('Not stored in a securefile');
  END IF;

  IF dbms_lob.issecurefile(l3) THEN
    dbms_output.put_line('Stored in a securefile');
  ELSE
    dbms_output.put_line('Not stored in a securefile');
  END IF;

  IF dbms_lob.issecurefile(l4) THEN
    dbms_output.put_line('Stored in a securefile');
  ELSE
    dbms_output.put_line('Not stored in a securefile');
  END IF;
END;
/


Not stored in a securefile
Not stored in a securefile
Stored in a securefile
Stored in a securefile

PL/SQL procedure successfully completed.

Wednesday, March 6, 2013

Disable Adobe Reader XI (11.x) Welcome Screen


Create and execute a registry file (disablewelcomescreen.reg) with contents:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\11.0\FeatureLockDown\cWelcomeScreen]
"bShowWelcomeScreen"=dword:00000000

Alternatively, open regedit and then navigate to Computer > HKEY_LOCAL_MACHINE > SOFTWARE > Policies > Adobe > Acrobat Reader > 11.0 > FeatureLockDown.

Right-click on FeatureLockDown and choose New > Key.

Name the key cWelcomeScreen.

Right-click the cWelcomeScreen key and choose New > DWORD (32-bit) Value.

Name the DWORD value bShowWelcomeScreen.

Leave the value as 0.

Tuesday, February 26, 2013

xargs - is the command executed once or multiple times? – examples

xargs is used to execute a command one or potentially multiple times using the standard input as arguments to that command.

What is sometimes unclear is whether the command invoked by xargs is executed once or multiple times.

This blog post should clear things up …

Firstly, here is the xargs program usage in layman’s terms

xargs [xargs options] [command] [arguments for command based on xargs standard input]

The standard input would typically be output piped from an earlier command such as find / ls / echo etc.  It doesn’t have to be, though; you can invoke xargs as a standalone no-arg command and simply type the input, terminating with a ctrl-d to signal end-of-input.

Let’s try a simple example combining the xargs --verbose option so that we can see the command-line that xargs will execute on the standard error output before it is actually invoked. In the screenshot below, I invoke the xargs command, and then enter 1 to 5 separating each with a newline (enter), followed by a terminating ctrl-d.  As I provided the --verbose option to xargs, it wrote out the command that it will execute and the arguments that it is going to provide to that command.


The output above shows that by default, if you don’t provide an explicit command for xargs to invoke, it will leverage /bin/echo. You can also see xargs invoked this echo command just a single time. What may not be obvious is how xargs processed the standard input to come up with arguments to supply the (in this case 'echo') command. xargs will by default treat whitespace and newlines as delimiters. In the example below, I entered: 1 TAB 2 NEWLINE 3 SPACE 4 NEWLINE 5 NEWLINE ctrl-d.
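The same default-delimiter behaviour can be shown non-interactively (GNU xargs assumed):

```shell
# TAB, SPACE and NEWLINE all act as argument delimiters; with no command
# given, xargs runs /bin/echo once with all five arguments
printf '1\t2\n3 4\n5\n' | xargs --verbose
```

--verbose prints the constructed command line on stderr before it is run; stdout is the single echoed line "1 2 3 4 5".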


So how and under what conditions does the command that xargs executes get invoked multiple times?

Well, you can either explicitly tell xargs that a particular command can only operate on a specific number of arguments at a time, and that the command should be reinvoked as required to work through the remaining arguments; or xargs may determine itself that the command to execute along with any arguments hits a maximum command-line length, in which case it automatically splits the arguments across multiple command invocations.

Let’s first explicitly tell xargs that a command invocation should only work on a maximum number of arguments at a time.  You do this through the -n option …


The above example first shows supplying a “-n1” option, which results in a command being invoked for each argument. Later the “-n2” option is leveraged, which results in a command being invoked for every two arguments. The final xargs test above shows how 1 TAB 2 NEWLINE 3 SPACE 4 NEWLINE 5 NEWLINE ctrl-d is processed with the “-n1” option.  You can see that xargs immediately starts invoking commands after each line of input is processed.
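In a non-interactive form (GNU xargs assumed), the -n behaviour looks like this:

```shell
# Five invocations, one argument each: "1" / "2" / "3" / "4" / "5"
printf '1 2 3 4 5' | xargs -n1 /bin/echo

# Three invocations, two arguments each: "1 2" / "3 4" / "5"
printf '1 2 3 4 5' | xargs -n2 /bin/echo
```

Adding --verbose to either pipeline shows each constructed /bin/echo command line on stderr as it is invoked.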

As mentioned prior, xargs may itself automatically split arguments across multiple commands if a maximum command-line character length is reached. The xargs binary will likely have a default size limit hardcoded to the operating system ARG_MAX length.


You can explicitly tell xargs the max command-line length using the --max-chars or -s options.  The length comprises the command, the initial-arguments, and the terminating nulls at the ends of the argument strings.  As seen below, “/bin/echo 1” will take 12 chars, and “/bin/echo 1 2” will take 14 chars (including terminating nulls).

/bin/echo 1
/bin/echo 1 2

Let’s try out this “-s” option…
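Using the 12- and 14-character counts from above, a 12-character limit only leaves room for one argument per /bin/echo invocation (GNU xargs assumed):

```shell
# "/bin/echo 1" consumes exactly 12 chars (command + null, arg + null),
# so each invocation carries one argument and /bin/echo runs twice
printf '1 2' | xargs -s 12 /bin/echo
```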


Here is a final example that ties everything together.  It demonstrates the following:

  1. xargs receiving piped standard-input from a prior command (i.e. tr / find).
  2. xargs using the --verbose option to output the command that will be invoked
  3. xargs being told to leverage an explicit delimiter “-d” option, e.g “\n” – newline
  4. xargs invoking an explicit command (the Unix “file” command)
  5. xargs invoking an explicit command once per argument (“-n1”)
  6. find command leveraging the “-print0” option to delimit search results by the null character (rather than newline), in conjunction with the “-0” (or --null) xargs option so that any search results which contain whitespace are treated correctly as a single command-argument.
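Point 6 in particular is worth a concrete sketch (the directory below is hypothetical): without -print0/-0, a filename containing a space would be split into two bogus arguments.

```shell
# Create a hypothetical file whose name contains a space
mkdir -p /tmp/xargs-demo
touch '/tmp/xargs-demo/file with spaces.txt'

# Null-delimited pipeline: the whole path reaches `file` as one argument
find /tmp/xargs-demo -type f -print0 | xargs -0 --verbose -n1 file
```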