
Thursday, January 28, 2021

Manually verifying RSA SHA Signature in Java using Cipher

The post below may be helpful in understanding how a signature is generated and verified.

Sample code:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

import java.util.Arrays;

import javax.crypto.Cipher;

// note: sun.security.util.* is internal JDK API - used here for illustration only
import sun.security.util.DerInputStream;
import sun.security.util.DerValue;

public class RSASignatureVerification
{

    public static void main(String[] args) throws Exception
    {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);

        KeyPair keyPair = generator.generateKeyPair();
        PrivateKey privateKey = keyPair.getPrivate();
        PublicKey publicKey = keyPair.getPublic();

        String data = "hello mshannon";
        byte[] dataBytes = data.getBytes("UTF8");

        Signature signer = Signature.getInstance("SHA512withRSA");
        signer.initSign(privateKey);
        signer.update(dataBytes);

        byte[] signature = signer.sign(); // signature bytes of the signing operation's result

        Signature verifier = Signature.getInstance("SHA512withRSA");
        verifier.initVerify(publicKey);
        verifier.update(dataBytes);

        boolean verified = verifier.verify(signature);
        if (verified)
        {
            System.out.println("Signature verified!");
        }


        /*
            The statement that signing is equivalent to RSA-encrypting the hash of
            the message with the private key is a greatly simplified view.
            The decrypted signature bytes typically convey an ASN.1 structure encoded
            using DER, with the hash just one component of that structure.
        */

        // let's decrypt the signature and see what is in it ...
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.DECRYPT_MODE, publicKey);

        byte[] decryptedSignatureBytes = cipher.doFinal(signature);


        /*
            Sample value of the decrypted signature, which was 83 bytes long:

            30 51 30 0D 06 09 60 86 48 01 65 03 04 02 03 05
            00 04 40 51 00 41 75 CA 3B 2B 6B C0 0A 3F 99 E3
            6B 7A 01 DC F2 9B 36 E6 0D D4 31 89 53 A3 D9 80
            6D AE DD 45 7E 55 45 01 FC C8 73 D2 DD 8D E5 B9
            E0 71 57 13 41 D0 CD FF CA 58 01 03 A3 DD 95 A1
            C1 EE C8

            Taking the sample bytes above ...
            DER uses a T,L,V (tag bytes, length bytes, value bytes) format.
            0x30 is the tag for a SEQUENCE - an ordered collection of one or more types -
            so the structure begins with a TLV triplet whose tag byte is 0x30.

            0x51 is the length = 81 decimal (the 81 bytes that follow)

            the 0x30 (48 decimal) that follows begins a second, nested sequence

            https://tools.ietf.org/html/rfc3447#page-43
            the full DER encoding T of the DigestInfo value for SHA-512 is
            30 51 30 0D 06 09 60 86 48 01 65 03 04 02 03 05 00 04 40 || H
            where || is concatenation and H is the hash value.

            0x0D is the length of the nested sequence = 13 decimal (13 bytes)

            0x06 is the tag for an OBJECT IDENTIFIER
            0x09 means the object identifier is 9 bytes long ...

            https://docs.microsoft.com/en-au/windows/win32/seccertenroll/about-object-identifier?redirectedfrom=MSDN

            taking 2.16.840.1.101.3.4.2.3 (the object identifier for the SHA-512 hash algorithm):

            The first two nodes of the OID are encoded onto a single byte:
            the first node is multiplied by decimal 40 and the value of the second node is added.
            2 * 40 + 16 = 96 decimal = 60 hex
            Node values less than or equal to 127 are encoded on one byte, so
            1 101 3 4 2 3 corresponds in hex to 01 65 03 04 02 03
            Node values greater than or equal to 128 are encoded on multiple bytes:
            bit 7 of every byte except the last is set to one, and bits 0 through 6
            of each byte carry the value in base-128 groups of 7 bits.
            840 decimal = 348 hex = 0000 0011 0100 1000 binary
            840 = 6 * 128 + 72, so it is encoded as (0x80 | 0x06) 0x48 = 86 48

            05 00          ; NULL (0 bytes) - the algorithm parameters

            04 40          ; OCTET STRING (0x40 bytes = 64 bytes)
            SHA-512 produces a 512-bit (64-byte) hash value

            51 00 41 ... C1 EE C8 is the 64-byte hash value
        */


        // parse DER encoded data
        DerInputStream derReader = new DerInputStream(decryptedSignatureBytes);

        byte[] hashValueFromSignature = null;

        // obtain the sequence of entities
        DerValue[] seq = derReader.getSequence(0);
        for (DerValue v : seq)
        {
            if (v.getTag() == 4) // 0x04 = OCTET STRING tag
            {
                // SHA-512 checksum extracted from the decrypted signature bytes
                hashValueFromSignature = v.getOctetString();
            }
        }

        MessageDigest md = MessageDigest.getInstance("SHA-512");
        md.update(dataBytes);

        byte[] hashValueCalculated = md.digest();

        boolean manuallyVerified = Arrays.equals(hashValueFromSignature, hashValueCalculated);
        if (manuallyVerified)
        {
            System.out.println("Signature manually verified!");
        }
        else
        {
            System.out.println("Signature could NOT be manually verified!");
        }
    }
}
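The OID byte-encoding rules walked through in the comment above can be exercised with a small standalone sketch. The class name `OidEncoder` is mine, not part of the post's code; it encodes a dotted-decimal OID into the DER value bytes (tag and length not included):

```java
import java.io.ByteArrayOutputStream;

public class OidEncoder
{
    // Encode a dotted-decimal OID string into its DER value bytes
    // (the 0x06 tag and length byte are not included).
    public static byte[] encodeOid(String oid)
    {
        String[] nodes = oid.split("\\.");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // The first two nodes share one byte: first * 40 + second
        writeBase128(out, Long.parseLong(nodes[0]) * 40 + Long.parseLong(nodes[1]));
        for (int i = 2; i < nodes.length; i++)
        {
            writeBase128(out, Long.parseLong(nodes[i]));
        }
        return out.toByteArray();
    }

    // Base-128 encoding: bit 7 set on every byte except the last
    private static void writeBase128(ByteArrayOutputStream out, long value)
    {
        int shift = 56;
        while (shift > 0 && (value >> shift) == 0)
        {
            shift -= 7;
        }
        while (shift > 0)
        {
            out.write((int) (0x80 | ((value >> shift) & 0x7F)));
            shift -= 7;
        }
        out.write((int) (value & 0x7F));
    }

    public static void main(String[] args)
    {
        StringBuilder sb = new StringBuilder();
        for (byte b : encodeOid("2.16.840.1.101.3.4.2.3"))
        {
            sb.append(String.format("%02X ", b));
        }
        System.out.println(sb.toString().trim());
        // prints: 60 86 48 01 65 03 04 02 03
    }
}
```

Running it reproduces the nine OID bytes seen in the decrypted signature dump: 60 for 2.16, 86 48 for 840, then 01 65 03 04 02 03.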


Friday, October 11, 2019

BLOB / BFILE SHA-256 calculation from the Oracle Database using SQL / PL/SQL by way of a Java stored procedure

The code below demonstrates how to calculate, from within the Oracle Database, an SHA-256 checksum of a BLOB or BFILE by way of a Java stored procedure, which in turn can be invoked from a SQL DML statement or PL/SQL block.

REM -- we don't want the ampersand in source below interpreted
SET DEFINE OFF


CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "IOUtils" AS
/* MShannon 2019 */
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

import oracle.jdbc.OracleBfile;
import oracle.sql.BLOB;

public class IO
{
    public static String getSHA256ChecksumHexEncoded(OracleBfile b) throws Exception
    {
        InputStream is = null;
        try
        {
            is = new BufferedInputStream(b.getBinaryStream());
            return getSHA256ChecksumHexEncoded(is);
        }
        finally
        {
            streamClose(is);
        }
    }

    public static String getSHA256ChecksumHexEncoded(BLOB b) throws Exception
    {
        InputStream is = null;
        try
        {
            is = new BufferedInputStream(b.getBinaryStream());
            return getSHA256ChecksumHexEncoded(is);
        }
        finally
        {
            streamClose(is);
        }
    }

    public static String getSHA256ChecksumHexEncoded(InputStream is) throws IOException
    {
        byte[] bytes = getDigest(is, "SHA-256");
        return toHex(bytes);
    }

    public static byte[] getDigest(InputStream is, String algorithm) throws IOException
    {
        byte[] bytes = new byte[262144];
        MessageDigest md = null;
        try
        {
            md = MessageDigest.getInstance(algorithm);

            int bytesRead = 0;
            do
            {
                bytesRead = is.read(bytes);
                if (bytesRead > 0)
                {
                    md.update(bytes, 0, bytesRead);
                }
            }
            while (bytesRead != -1);

            return md.digest();
        }
        catch (NoSuchAlgorithmException e)
        {
            String msg = String.format("Failed to compute the checksum. No such algorithm %s. Error: %s", algorithm,
                e.getMessage());
            throw new Error(msg, e);
        }
    }

    public static void streamClose(InputStream in)
    {
        if (in != null)
        {
            try
            {
                in.close();
            }
            catch (IOException ignore)
            {
            }
        }
    }

    public static String toHex(byte[] bytes)
    {
        if (bytes == null)
        {
            return null;
        }

        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (int i = 0; i < bytes.length; i++)
        {
            sb.append(Character.forDigit((bytes[i] & 0xf0) >> 4, 16));
            sb.append(Character.forDigit(bytes[i] & 0x0f, 16));
        }

        return sb.toString();
    }
}
/

CREATE OR REPLACE FUNCTION hash_sha256_bfile (p_bfile in BFILE) RETURN VARCHAR2 AS LANGUAGE JAVA
NAME 'IO.getSHA256ChecksumHexEncoded(oracle.jdbc.OracleBfile) return String';
/

CREATE OR REPLACE FUNCTION hash_sha256_blob (p_blob in BLOB) RETURN VARCHAR2 AS LANGUAGE JAVA
NAME 'IO.getSHA256ChecksumHexEncoded(oracle.sql.BLOB) return String';
/


SET DEFINE ON


In the example use-case below we calculate an SHA-256 checksum of the file /etc/hosts present on the database server. We first calculate the checksum directly from a BFILE, and subsequently from a BLOB.

REM -- AS AN APPROPRIATELY PRIVILEGED USER (e.g. DBA) - CREATE DIRECTORY OBJECT
CREATE DIRECTORY FILEUPLOADS AS '/etc';
REM -- FOR TESTING ONLY (NOT FOR PRODUCTION) ALLOW EVERYONE TO READ DIR FILES
GRANT READ ON DIRECTORY FILEUPLOADS TO public;

SET SERVEROUTPUT ON

DECLARE
  l_bfile BFILE := BFILENAME('FILEUPLOADS', 'hosts');
  l_result VARCHAR2(64);
BEGIN
  DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.LOB_READONLY);

  SELECT hash_sha256_bfile(l_bfile) INTO l_result FROM dual;
  DBMS_OUTPUT.PUT_LINE('Result=' || l_result);

  -- Close lob objects
  DBMS_LOB.CLOSE(l_bfile);
END;
/

DECLARE
  l_bfile BFILE := BFILENAME('FILEUPLOADS', 'hosts');
  l_blob BLOB;
  l_result VARCHAR2(64);
BEGIN
  DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.CREATETEMPORARY(l_blob,TRUE, DBMS_LOB.SESSION);

  DBMS_LOB.LOADFROMFILE(
        dest_lob => l_blob
      , src_lob  => l_bfile
      , amount   => DBMS_LOB.LOBMAXSIZE
      , dest_offset   => 1
      , src_offset   => 1);

  SELECT hash_sha256_blob(l_blob) INTO l_result FROM dual;
  DBMS_OUTPUT.PUT_LINE('Result=' || l_result);

  -- Close lob objects
  DBMS_LOB.CLOSE(l_bfile);
  DBMS_LOB.FREETEMPORARY(l_blob);
END;
/
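To sanity-check the streaming-digest and hex-encoding logic outside the database, the same approach can be run in plain Java against the well-known FIPS 180 SHA-256 test vector for "abc". The class name `DigestCheck` is mine; the digest and hex code mirrors the stored-procedure source above:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestCheck
{
    // Same streaming digest + hex encoding approach as the stored procedure code
    public static String sha256Hex(InputStream is) throws IOException, NoSuchAlgorithmException
    {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1)
        {
            md.update(buffer, 0, bytesRead);
        }
        byte[] digest = md.digest();
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest)
        {
            sb.append(Character.forDigit((b & 0xf0) >> 4, 16));
            sb.append(Character.forDigit(b & 0x0f, 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception
    {
        String hex = sha256Hex(new ByteArrayInputStream("abc".getBytes("UTF-8")));
        System.out.println(hex);
        // FIPS 180 test vector for "abc":
        // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    }
}
```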

I hope this helps someone!

Friday, July 10, 2015

Java SSL HttpUrlConnection Performance Slow using TLS 1.0 with CBC

The fix Oracle implemented in the JVM to combat the BEAST attack can have a significant performance impact when using TLS 1.0 with CBC.  This is particularly noticeable when performing large streaming uploads with HttpURLConnection using setFixedLengthStreamingMode (rather than its default mode, where it buffers the request payload in full).

When performing writes to HttpURLConnection's OutputStream in fixed-length streaming mode through a BufferedOutputStream with the default 8k buffer [OutputStream out = new BufferedOutputStream(uc.getOutputStream())], you can see a pattern like the one below when running with the system property -Djavax.net.debug=ssl,handshake set.

Java 6 1.6.0_91
%% Cached client session: [Session-1, TLS_RSA_WITH_AES_128_CBC_SHA]
...
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
...


Java 7 1.7.0_15
%% Cached client session: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
...
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
...

When using Java 8 and TLS 1.2, there are none of the 32-byte records in the output …

Java 8 1.8.0_40
%% Cached client session: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
...
main, WRITE: TLSv1.2 Application Data, length = 16432

main, WRITE: TLSv1.2 Application Data, length = 16432
main, WRITE: TLSv1.2 Application Data, length = 16432
main, WRITE: TLSv1.2 Application Data, length = 16432
...

If I set the system property "-Djsse.enableCBCProtection=false" with Java 6 (disabling the BEAST attack fix), the 32-byte records disappear ...

%% Cached client session: [Session-1, TLS_RSA_WITH_AES_128_CBC_SHA]
...
main, WRITE: TLSv1 Application Data, length = 16416

main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
...

As disabling the CBC protection is not viable in production, I looked at what could be done to minimize the occurrence of the 32-byte records when using TLS 1.0 with CBC.  It turns out that by increasing the buffer size of the BufferedOutputStream wrapping HttpURLConnection’s OutputStream from the default 8k to something much larger, e.g. 256k, the number of 32-byte records drops significantly, resulting in a substantial performance increase.

Java 7 1.7.0_15 with 32k buffer
%% Cached client session: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
...
main, WRITE: TLSv1 Application Data, length = 32
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 32

main, WRITE: TLSv1 Application Data, length = 16416
main, WRITE: TLSv1 Application Data, length = 16416
...

As expected, the larger buffer had minimal (or no) impact with Java 1.8 over the TLS 1.2 connection.  Java 1.7 can support TLS 1.2, though by default it will negotiate TLS 1.0 unless explicitly instructed otherwise:

http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html#tlsprotonote

Footnote 1 - Although SunJSSE in the Java SE 7 release supports TLS 1.1 and TLS 1.2, neither version is enabled by default for client connections. Some servers do not implement forward compatibility correctly and refuse to talk to TLS 1.1 or TLS 1.2 clients.

Oracle’s acknowledgement of the BEAST exploit when using TLS 1.0 with CBC (Cipher Block Chaining) is part of CVE-2011-3389:

CVE-2011-3389 - Java Runtime Environment, SSL/TLS (JSSE), remotely exploitable without authentication, CVSS base score 4.3 (access vector Network, complexity Medium, partial confidentiality impact, no integrity or availability impact). Affects JDK and JRE 7; 6 Update 27 and before; 5.0 Update 31 and before; 1.4.2_33 and before; JRockit R28.1.4 and before.

This is a vulnerability in the SSLv3/TLS 1.0 protocol. Exploitation of this vulnerability requires a man-in-the-middle and the attacker needs to be able to inject chosen plaintext.

The links below describe the attack:
https://blog.torproject.org/blog/tor-and-beast-ssl-attack
http://blogs.cisco.com/security/beat-the-beast-with-tls

To combat the exploit, Oracle's fix splits each write() to the underlying OutputStream into at least two separate TLS records, with every record getting a different initialization vector.  TLS itself caps the maximum record size at 16384 bytes (the size of the raw unencrypted payload).  http://blog.fourthbit.com/2014/12/23/traffic-analysis-of-an-ssl-slash-tls-session
So with the fix, a 16k write of client data to the underlying OutputStream results in one TLS record containing the first byte encrypted, and a second TLS record containing the remaining 16383 bytes. A 32k write results in three TLS records: one containing the first byte, a second containing the next 16384 bytes, and a third containing the remaining 16383 bytes.  So when using TLS 1.0 with CBC, the bigger the buffer associated with each write, the fewer one-byte encrypted TLS records you are going to see.

To give you an idea of the effect that buffer size has with TLS 1.0 and CBC when the JVM has the BEAST fix applied, assume a file size of 31527359 bytes (~30 megabytes):
with a 16k buffer: 16384 = 1 + 16383; 31527359 / 16384 = ~1924 writes; so ~1924 one-byte SSL records and ~1924 16383-byte records
with a 32k buffer: 32768 = 1 + 16384 + 16383; 31527359 / 32768 = ~962 writes; so ~962 one-byte records, ~962 16384-byte records and ~962 16383-byte records
with a 64k buffer: 65536 = 1 + 3*16384 + 16383; 31527359 / 65536 = ~481 writes; so ~481 one-byte records, 3*481 16384-byte records and ~481 16383-byte records
with a 256k buffer: 262144 = 1 + 15*16384 + 16383; 31527359 / 262144 = ~120 writes; so ~120 one-byte records, 15*120 16384-byte records and ~120 16383-byte records

So to summarize, for the 30 megabyte file, buffer size versus resulting one-byte SSL records:
16k: ~1924 one-byte records
32k: ~962 one-byte records
64k: ~481 one-byte records
256k: ~120 one-byte records
512k: ~60 one-byte records
1024k: ~30 one-byte records
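The record arithmetic above can be sketched as a small calculator. This is an estimate only: it assumes every write of two or more bytes is split 1/n-1, and it ignores handshake and alert records (the class name `CbcSplitEstimator` is mine):

```java
public class CbcSplitEstimator
{
    static final int MAX_TLS_RECORD = 16384; // max plaintext bytes per TLS record

    // Estimate TLS records produced when uploading fileSize bytes in writes of
    // bufferSize bytes, with the BEAST 1/n-1 record split applied to each write.
    // Returns { oneByteRecords, dataRecords }.
    public static long[] estimate(long fileSize, int bufferSize)
    {
        long oneByteRecords = 0;
        long dataRecords = 0;
        long remaining = fileSize;
        while (remaining > 0)
        {
            long n = Math.min(remaining, bufferSize);
            if (n > 1)
            {
                oneByteRecords++; // first record carries a single encrypted byte
                dataRecords += (n - 1 + MAX_TLS_RECORD - 1) / MAX_TLS_RECORD;
            }
            else
            {
                dataRecords++; // a one-byte write needs no split
            }
            remaining -= n;
        }
        return new long[] { oneByteRecords, dataRecords };
    }

    public static void main(String[] args)
    {
        long fileSize = 31527359L; // the ~30 MB file from the example above
        for (int kb : new int[] { 16, 32, 64, 256, 512, 1024 })
        {
            long[] r = estimate(fileSize, kb * 1024);
            System.out.println(kb + "k buffer: " + r[0] + " one-byte records, "
                + r[1] + " data records");
        }
    }
}
```

The output matches the hand calculations above to within one write (the final partial buffer also gets split).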

Each SSL record carries a real processing cost: the client must encrypt and MAC it, TCP adds per-record network overhead, and the server must validate and decrypt the payload.
So ideally, going forward, Java 1.8 with TLS 1.2 is what you want to strive for.  If you are stuck with TLS 1.0, the large buffer can help performance considerably.

Tuesday, May 6, 2014

Java split a large file – sample code – high performance

 

Sample Java code to split a source file into chunks.

I needed a quick way to split big log files into manageable chunks that could subsequently be opened with my legacy editor without hitting out-of-memory errors.

I did not trust the available freeware solutions HJSplit / FFSJ etc due to the bad VirusTotal.com reports indicating potential malware.

So I coded my own using Java NIO (New I/O), which provides excellent performance.

Source code follows:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import java.nio.ByteBuffer;

import java.nio.channels.FileChannel;

/**
* Source code to split a file into chunks using Java NIO.
*
* YYYY-MM-DD
* 2014-05-06 mshannon - created.
*/
public class Split
{
public static void main(String[] args) throws IOException
{
  long splitSize = 128 * 1048576; // 128 Megabytes file chunks
  int bufferSize = 256 * 1048576; // 256 Megabyte memory buffer for reading source file

  // use command-line arguments when supplied, else fall back to hardcoded test paths
  String source = args.length > 0 ? args[0] :
   "/C:/Users/mshannon/Desktop/18597996/UCMTRACE/idccs_UCM_server1_1398902885000.log";

  String output = args.length > 1 ? args[1] :
   "/C:/Users/mshannon/Desktop/18597996/UCMTRACE/idccs_UCM_server1_1398902885000.log.split";

  FileChannel sourceChannel = null;
  try
  {
   sourceChannel = new FileInputStream(source).getChannel();

   ByteBuffer buffer = ByteBuffer.allocateDirect(bufferSize);

   long totalBytesRead = 0; // total bytes read from channel
   long totalBytesWritten = 0; // total bytes written to output

   double numberOfChunks = Math.ceil(sourceChannel.size() / (double) splitSize);
   int padSize = (int) Math.floor(Math.log10(numberOfChunks) + 1);
   String outputFileFormat = "%s.%0" + padSize + "d";

   FileChannel outputChannel = null; // output channel (split file) we are currently writing
   long outputChunkNumber = 0; // the split file / chunk number
   long outputChunkBytesWritten = 0; // number of bytes written to chunk so far

   try
   {
    for (int bytesRead = sourceChannel.read(buffer); bytesRead != -1; bytesRead = sourceChannel.read(buffer))
    {
     totalBytesRead += bytesRead;

     System.out.println(String.format("Read %d bytes from channel; total bytes read %d/%d ", bytesRead,
      totalBytesRead, sourceChannel.size()));

     buffer.flip(); // convert the buffer from writing data to buffer from disk to reading mode

     int bytesWrittenFromBuffer = 0; // number of bytes written from buffer

     while (buffer.hasRemaining())
     {
      if (outputChannel == null)
      {
       outputChunkNumber++;
       outputChunkBytesWritten = 0;

       String outputName = String.format(outputFileFormat, output, outputChunkNumber);
       System.out.println(String.format("Creating new output channel %s", outputName));
       outputChannel = new FileOutputStream(outputName).getChannel();
      }

      long chunkBytesFree = (splitSize - outputChunkBytesWritten); // maximum free space in chunk
      int bytesToWrite = (int) Math.min(buffer.remaining(), chunkBytesFree); // maximum bytes that should be read from current byte buffer

      System.out.println(
       String.format(
        "Byte buffer has %d remaining bytes; chunk has %d bytes free; writing up to %d bytes to chunk",
         buffer.remaining(), chunkBytesFree, bytesToWrite));

      buffer.limit(bytesWrittenFromBuffer + bytesToWrite); // set limit in buffer up to where bytes can be read

      int bytesWritten = outputChannel.write(buffer);

      outputChunkBytesWritten += bytesWritten;
      bytesWrittenFromBuffer += bytesWritten;
      totalBytesWritten += bytesWritten;

      System.out.println(
       String.format(
        "Wrote %d to chunk; %d bytes written to chunk so far; %d bytes written from buffer so far; %d bytes written in total",
         bytesWritten, outputChunkBytesWritten, bytesWrittenFromBuffer, totalBytesWritten));

      buffer.limit(bytesRead); // reset limit

      if (totalBytesWritten == sourceChannel.size())
      {
       System.out.println("Finished writing last chunk");

       closeChannel(outputChannel);
       outputChannel = null;

       break;
      }
      else if (outputChunkBytesWritten == splitSize)
      {
       System.out.println("Chunk at capacity; closing()");

       closeChannel(outputChannel);
       outputChannel = null;
      }
     }

     buffer.clear();
    }
   }
   finally
   {
    closeChannel(outputChannel);
   }
  }
  finally
  {
   closeChannel(sourceChannel);
  }

}

private static void closeChannel(FileChannel channel)
{
  if (channel != null)
  {
   try
   {
    channel.close();
   }
   catch (Exception ignore)
   {
    ;
   }
  }
}
}

Thursday, February 13, 2014

Two-way SSL guide: Java, Android, Browser clients and WebLogic Server

The notes below outline the steps I took to test two-way SSL from scratch using the updated keytool functionality found in Java 7.  Rather than use a commercial certificate authority like VeriSign (which costs real money), my notes show how to generate your own CA and all PKI artefacts using just the keytool command.  These artefacts can subsequently be utilized for development / testing / private-network scenarios.  Note that keytool is simply a CLI / console program shipped with the Java JDK / JRE that wraps the underlying Java security/crypto classes.

If you can follow these steps and understand the process, then transitioning to a commercial trusted certificate authority like VeriSign should be straightforward.

In my previous article http://todayguesswhat.blogspot.com/2012/07/weblogic-https-one-way-ssl-tutorial.html I state:

One-way SSL is the mode which most "storefronts" run on the internet so as to be able to accept credit card details and the like without the customer’s details being sent effectively in the clear from a packet-capture perspective.  In this mode, the server must present a valid public certificate to the client, but the client is not required to present a certificate to the server.

With two-way SSL, trust is enhanced by requiring that both the server and the client present valid certificates to each other to prove their identity.

From an Oracle WebLogic Server perspective, two-way SSL enables the server to only* accept incoming SSL connections from clients who can present a public certificate that can be validated against the contents of the server’s configured trust store.

*Assuming WebLogic Server  is configured with “Client Certs Requested And Enforced” option.

The actual certificate verification process itself is quite detailed and would make a good future blog post. RFC specifications of interest are RFC 5280 (which obsoletes RFC 3280) and RFC 2818 and RFC 6125.

WebLogic server can also be configured to subsequently authenticate the client based on some attribute (such as cn – common name) extracted from the client’s validated X509 certificate by configuring the Default Identity Asserter; this is commonly known as certificate authentication.  This is not mandatory however - Username/password authentication (or any style for that matter) can still be leveraged on top of a two-way SSL connection.

Now let’s get on with it …

Why do we need Java 7 keytool support?  Specifically for signing certificate requests, and also to be able to generate a keypair with custom X509 extensions such as SubjectAlternativeName / BasicConstraints etc.

High-level, we need the following:
Custom Certificate Authority
Server Certificate signed by Custom CA
Client Certificate signed by Custom CA

Artifacts required for two-way SSL to support WebLogic server and various clients types (Browser / Java etc):
Server keystore in JKS format
Server truststore in JKS format
Client keystore in JKS format
Client truststore in JKS format
Client keystore in PKCS12 keystore format
Client truststore in PKCS12 format
CA certificate in PEM format

Note: browsers and mobile devices typically want public certificates in PEM format and keypairs (private key / public key) in PKCS12 format.
Java clients, on the other hand, generally use JKS format keystores.

Steps below assume Linux zsh

Constants – edit accordingly

CA_P12_KEYSTORE_FILE=/tmp/ca.p12
CA_P12_KEYSTORE_PASSWORD=welcome1
CA_KEY_ALIAS=customca
CA_KEY_PASSWORD=welcome1
CA_DNAME="CN=CustomCA, OU=MyOrgUnit, O=MyOrg, L=MyTown, ST=MyState, C=MyCountry"
CA_CER=/tmp/ca.pem

SERVER_JKS_KEYSTORE_FILE=/tmp/server.jks
SERVER_JKS_KEYSTORE_PASSWORD=welcome1
SERVER_KEY_ALIAS=server
SERVER_KEY_PASSWORD=welcome1
SERVER_DNAME="CN=www.acme.com"
SERVER_CSR=/tmp/server_cert_signing_request.pem
SERVER_CER=/tmp/server_cert_signed.pem
SERVER_JKS_TRUST_KEYSTORE_FILE=/tmp/server-trust.jks
SERVER_JKS_TRUST_KEYSTORE_PASSWORD=welcome1

CLIENT_JKS_KEYSTORE_FILE=/tmp/client.jks
CLIENT_JKS_KEYSTORE_PASSWORD=welcome1
CLIENT_KEY_ALIAS=client
CLIENT_KEY_PASSWORD=welcome1
CLIENT_DNAME="CN=mshannon, OU=MyOrgUnit, O=MyOrg, L=MyTown, ST=MyState, C=MyCountry"
CLIENT_CSR=/tmp/client_cert_signing_request.pem
CLIENT_CER=/tmp/client_cert_signed.pem
CLIENT_JKS_TRUST_KEYSTORE_FILE=/tmp/client-trust.jks
CLIENT_JKS_TRUST_KEYSTORE_PASSWORD=welcome1
CLIENT_P12_KEYSTORE_FILE=/tmp/client.p12
CLIENT_P12_KEYSTORE_PASSWORD=welcome1

Verify version of Java

(/usr/java/jre/jre1.7.0_45)% export JAVA_HOME=`pwd`
(/usr/java/jre/jre1.7.0_45)% export PATH=$JAVA_HOME/bin:$PATH
(/usr/java/jre/jre1.7.0_45)% java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

Create CA, Server and Client keystores

# -keyalg - Algorithm used to generate the public-private key pair - e.g. RSA or DSA
# -keysize - Size in bits of the public and private keys (1024 matches the original post; 2048 or larger is recommended today)
# -sigalg - Algorithm used to sign the certificate - for DSA this would be SHA1withDSA, for RSA, SHA1withRSA
# -validity - Number of days before the certificate expires
# -ext bc=ca:true - WebLogic/Firefox require X509 v3 CA certificates to have a Basic Constraints extension with the CA field set to TRUE;
# without bc=ca:true, Firefox won't allow us to import the CA's certificate.

# look after the CA_P12_KEYSTORE_FILE - it will contain our CA private key and should be locked away!

keytool -genkeypair -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$CA_P12_KEYSTORE_PASSWORD" \
-keyalg RSA -keysize 1024 -validity 1825 -alias "$CA_KEY_ALIAS" -keypass "$CA_KEY_PASSWORD" -dname "$CA_DNAME" \
-ext "bc=ca:true"

keytool -genkeypair -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-keyalg RSA -keysize 1024 -validity 1825 -alias "$SERVER_KEY_ALIAS" -keypass "$SERVER_KEY_PASSWORD" -dname "$SERVER_DNAME"

keytool -genkeypair -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-keyalg RSA -keysize 1024 -validity 1825 -alias "$CLIENT_KEY_ALIAS" -keypass "$CLIENT_KEY_PASSWORD" -dname "$CLIENT_DNAME"

Export CA certificate

# -rfc - means to output in PEM (rfc style) base64 encoded format, output will look like ----BEGIN.... etc

keytool -exportcert -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$CA_P12_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -rfc

Generate certificate signing requests for client and server (to be supplied to CA for subsequent signing)

# The public certificates for the client and server keypairs created above are currently self-signed
# (such that, issuer = subject , private key signed its associated public certificate)
# For a production server we want to get our public certificate signed by a valid certificate authority (CA).
# We are going to use the customca we created above to sign these certificates.
# We first need to get a certificate signing request ready ...

keytool -certreq -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-alias "$SERVER_KEY_ALIAS" -keypass "$SERVER_KEY_PASSWORD" -file "$SERVER_CSR"

keytool -certreq -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-alias "$CLIENT_KEY_ALIAS" -keypass "$CLIENT_KEY_PASSWORD" -file "$CLIENT_CSR"

Sign certificate requests

keytool -gencert -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$CA_P12_KEYSTORE_PASSWORD" \
-validity 1825 -alias "$CA_KEY_ALIAS" -keypass "$CA_KEY_PASSWORD" \
-infile "$SERVER_CSR" -outfile "$SERVER_CER" -rfc

keytool -gencert -v -keystore "$CA_P12_KEYSTORE_FILE" \
-storetype PKCS12 -storepass "$CA_P12_KEYSTORE_PASSWORD" \
-validity 1825 -alias "$CA_KEY_ALIAS" -keypass "$CA_KEY_PASSWORD" \
-infile "$CLIENT_CSR" -outfile "$CLIENT_CER" -rfc

Import signed certificates

# Once we complete the signing, the certificate is no longer self-signed, but rather signed by our CA.
# the issuer and owner are now different. 

# Now we are ready to import the signed public certificates back into the keystores ...

# keytool prevents us from importing a certificate if it cannot verify the full signing chain.
# As we leveraged our custom CA, we need to import the CA's public certificate into our keystore as a
# trusted certificate authority prior to importing our signed certificate.
# Otherwise a 'Failed to establish chain from reply' error will occur.

keytool -import -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

keytool -import -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

keytool -import -v -keystore "$SERVER_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_KEYSTORE_PASSWORD" \
-alias "$SERVER_KEY_ALIAS" -keypass "$SERVER_KEY_PASSWORD" -file "$SERVER_CER" -noprompt

keytool -import -v -keystore "$CLIENT_JKS_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-alias "$CLIENT_KEY_ALIAS" -keypass "$CLIENT_KEY_PASSWORD" -file "$CLIENT_CER" -noprompt

Create trust stores

# for one-way SSL
# the trust keystore for the client needs to include the certificate for the trusted certificate authority that signed the certificate for the server

# for two-way SSL
# the trust keystore for the client needs to include the certificate for the trusted certificate authority that signed the certificate for the server
# the trust keystore for the server needs to include the certificate for the trusted certificate authority that signed the certificate for the client

# for two-way SSL connection, the client verifies the identity of the server and subsequently passes its certificate to the server.
# The server must then validate the client identity before completing the SSL handshake

# given our client and server certificates are both issued by the same CA, the trust stores for both will just contain the custom CA cert

keytool -import -v -keystore "$SERVER_JKS_TRUST_KEYSTORE_FILE" \
-storetype JKS -storepass "$SERVER_JKS_TRUST_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

keytool -import -v -keystore "$CLIENT_JKS_TRUST_KEYSTORE_FILE" \
-storetype JKS -storepass "$CLIENT_JKS_TRUST_KEYSTORE_PASSWORD" \
-alias "$CA_KEY_ALIAS" -file "$CA_CER" -noprompt

Create PKCS12 formats for Android / Browser clients

# note - warning will be given that the customca public cert entry cannot be imported
# TrustedCertEntry not supported
# this is expected
# http://docs.oracle.com/javase/7/docs/technotes/guides/security/crypto/CryptoSpec.html#KeystoreImplementation
# As of JDK 6, standards for storing Trusted Certificates in "pkcs12" have not been established yet

keytool -importkeystore -v \
-srckeystore "$CLIENT_JKS_KEYSTORE_FILE" -srcstoretype JKS -srcstorepass "$CLIENT_JKS_KEYSTORE_PASSWORD" \
-destkeystore "$CLIENT_P12_KEYSTORE_FILE" -deststoretype PKCS12 -deststorepass "$CLIENT_P12_KEYSTORE_PASSWORD" \
-noprompt

Configuring WebLogic server for two-way SSL

cp "$SERVER_JKS_KEYSTORE_FILE" /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib
cp "$SERVER_JKS_TRUST_KEYSTORE_FILE" /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib

In order to leverage the above keystores, connect to the Administration Console, expand Environment, and select Servers.
Choose the server for which you want to configure the identity and trust keystores, and select Configuration > Keystores.
Change Keystores to be "Custom Identity and Custom Trust"

You would then fill in the relevant fields (keystore fully qualified path, type (JKS), and keystore access password). e.g.

Custom Identity Keystore: /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib/server.jks
Custom Identity Keystore Type: JKS
Custom Identity Keystore Passphrase: welcome1

Custom Trust Keystore: /u01/app/oracle/product/Middleware/wlserver_10.3/server/lib/server-trust.jks
Custom Trust Keystore Type: JKS
Custom Trust Keystore Passphrase: welcome1

SAVE

Next, you need to enable the SSL Listen Port for the server:
[Home >Summary of Servers >XXX > ] Configuration > General.

SSL Listen Port: Enabled (Check)
SSL Listen Port: XXXX

SAVE

Next, you need to tell WebLogic the alias and password in order to access the private key from the Identity Store:
[Home >Summary of Servers >XXX > ] Configuration > SSL.

Identity and Trust Locations: Keystores
Private Key Alias: server
Private Key Passphrase: welcome1

SAVE

Click Advanced at the bottom of the page ([Home >Summary of Servers >XXX > ] Configuration > SSL)
Set the Two Way Client Cert Behaviour attribute to "Client Certs Requested And Enforced"

    Client Certs Not Requested: The default (meaning one-way SSL).
    Client Certs Requested But Not Enforced: Requires a client to present a certificate. If a certificate is not presented, the SSL connection continues.
    Client Certs Requested And Enforced: Requires a client to present a certificate. If a certificate is not presented, the SSL connection is terminated.


Check "Use JSSE SSL" box
Failure to check this box above will likely result in "Cannot convert identity certificate" error when restarting the managed server, and the HTTPS port won't be open for connections.

SAVE

Restart server.

Configuring Browser client

From Internet Explorer > Tools > Internet Options > Contents > Certificates

Import under the Trusted Root Certification Authorities tab the ca.pem file
- Place all certificates in the following store: "Trusted Root Certification Authorities"
It will popup Security Warning - You are about to install a certificate from a certification authority (CA) claiming to represent.....
Do you want to install this certificate: Yes

Import under the Personal tab the client.p12 file
It will popup with 'To maintain security, the private key was protected with a password'
Supply the password for the private key: welcome1
- Place all certificates in the following store: "Personal"

-----------------

From Firefox > Tools > Options > Advanced > Encryption > View Certificates
From the "Authorities" tab  , import ca.pem
check "Trust this CA to identify websites"
ok.
(It will get listed under the organization "MyOrg" in the tree based on the certificate dname that was leveraged)
From the "Your Certificates" tab, import client.p12
Enter the password that was used to encrypt this certificate backup: welcome1
It should state "Successfully restored your security certificate(s) and private key(s)."

Accessing a server configured for two-way SSL from a Browser

IE - May automatically supply the certificate to the server
Firefox -
"This site has requested that you identify yourself with a certificate"
"Choose a certificate to present as identification"
x Remember this decision

Accessing a server configured for two-way SSL from a Java client

Invoke Java with explicit SSL trust store elements set (to trust server's certificate chain) ...

System.setProperty("javax.net.ssl.trustStore", "/C:/Users/mshannon/Desktop/client-trust.jks");
System.setProperty("javax.net.ssl.trustStoreType", "JKS");
System.setProperty("javax.net.ssl.trustStorePassword", "welcome1");

Otherwise you will see...
"javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"

Given we are using two-way SSL, we must also specify the client certificate details ...

System.setProperty("javax.net.ssl.keyStore", "/C:/Users/mshannon/Desktop/client.jks");
System.setProperty("javax.net.ssl.keyStoreType", "JKS");
System.setProperty("javax.net.ssl.keyStorePassword", "welcome1");

Otherwise you will see...
"javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate"

Unknown Information – Is it possible through System properties to specify which key (based on alias) to use from the client keystore, and also can a password be provided for such a key (alias)?
Currently we are relying on the fact that the client keystore just contains the one key-pair and that key's alias password entry matches the keystore password!

Configuring WebLogic Server to allow authentication using the client certificate

The above steps should have resulted in two-way SSL transport security.  WebLogic however can also be configured to extract the client username from the client supplied public certificate and map this to a user in the identity store.  To do this, we need to configure Default Identity Asserter to allow authentication using the client certificate.  The deployed web application must also allow client certificate authentication as a login method.

Configuring Default Identity Asserter to support X.509 certificates involves:

1) Connecting to the WebLogic Server Administration Console as an administrator (e.g. weblogic user)
2) Navigating to appropriate security realm (e.g. myrealm)
3) Under the Providers tab and Authentication sub-tab selecting DefaultIdentityAsserter (WebLogic Identity Assertion provider)
4) On the Settings for DefaultIdentityAsserter page under the Configuration tab and Common sub-tab choosing X.509 as a supported token type and clicking Save.
5) On the Settings for DefaultIdentityAsserter page under the Configuration tab and Provider-Specific sub-tab enabling the Default User Name Mapper and choosing the appropriate X.509 attribute within the subject DN to leverage as the username and correcting the delimiter as appropriate and clicking Save.
6) Restarting the WebLogic Server.

The deployed web application must also list CLIENT-CERT as an authentication method in the web.xml descriptor file:

<login-config>
    <auth-method>CLIENT-CERT</auth-method>
</login-config>


Thursday, August 22, 2013

Synology Remote Shutdown Poweroff BusyBox

Being the energy conserving environmentalist I am (read – tightass) I look for ways to reduce unnecessary power consumption. I have quite a few devices that are plugged in 24x7, but get limited use.  This includes my HTPC setup that comprises a Mac Mini i5 (bootcamp Windows 8) and a Synology DS413j NAS.  In the previous blog post I outlined costs involved in running the Synology DS413j.  The DS413j does not support system hibernation, only disk hibernation.  From my testing, it pulls about 11w continuous when the hard disk was hibernated.  In the grand scheme of things, this is not a big deal.

So what can be done to reduce power consumption?  Synology supports scheduled power-on / power-off events through their DSM.  There is also the very useful Advanced Power Manager package that takes this a step further by preventing the shutdown whilst any detected network/disk activity is above a specific threshold.  This way, if you are watching a movie and reach the scheduled power-off time midway, the NAS won’t shut down until such time as disk/network activity falls below the configured threshold.

Advanced Power Manager

In the screenshot above, I have configured Power Off times for every day of the week, but not necessarily power on times.  This means the NAS will always shut down at night if running, but not necessarily start again automatically in the morning.

Whilst the above package is great, I wanted to go a step further and support remote shutdown of the NAS triggered by a desktop shortcut and/or remote control event.  In my HTPC setup, I’m using EventGhost to coordinate infrared remote control events to specific actions. I leverage the On Screen Menu EventGhost plugin capability to display a menu of options rendered on my (television) screen for interacting with the HTPC.  This includes launching XBMC, Suspending and Shutting down the Mac Mini, Sending a Wake On Lan packet to the Synology.  I want to add a new option to this menu to shutdown the NAS.

One would think remote shutdown is pretty simple.  In fact it can be very simple: enable SSH on the NAS, and then leverage something like putty to make a “root” user connection to the NAS, supplying a “Remote command” option like the following

(screenshot)

You would simply save the putty session under some specific name (e.g. root-shutdown), then trigger it using a shortcut link such as "C:\Program Files (x86)\PuTTY\PUTTY.EXE" -load "root-shutdown"

But what if you want to grant the ability to power off the NAS to some non-root user?  Would it not be great to have some user, say the fictitious user “netshutdown”, who simply by connecting to the NAS through telnet / SSH would result in the NAS shutting down?

This would be easily accomplished using something like sudo, whereby administration commands can easily be delegated.  However “sudo” is not available in the standard DSM 4.2 install on the DS413j.  Simple then, let’s leverage suid on the poweroff executable so that it runs as root.  However, the poweroff executable is actually a symbolic link to /bin/busybox.  Setting suid on busybox also makes no difference:

(screenshot)

Setting suid on busybox does however allow “su” to function from a non-root user.

(screenshot)

I’m not comfortable setting suid on the busybox executable given pretty much every command in /bin and a number from /sbin are linked to it.  One extreme word of caution!!! Do not make the mistake of executing chmod a-x on the busybox executable or anything linked to it!  You will hose your system.  I was extremely lucky to have perl installed on my NAS, and had not logged out from my root session, and was able to leverage perl’s chmod function to restore permissions! (what a relief)

(screenshot)

If you search for busybox, poweroff, and suid, you find a number of results that talk about employing techniques such as /etc/busybox.conf to call out specific applets and who can run them, creating C wrapper programs that leverage execve to call busybox, or setuid and system to call /sbin/poweroff, etc.  I tried all of these and none of them worked with the compiled busybox executable on my NAS; I would receive Permission denied / Operation not permitted errors.

Finally however, I cracked it.

I created a wrapper program that rather than call /sbin/poweroff, calls a shell script owned by root, which in turn triggers poweroff.

The program (shutdown.c) is as follows:

#include <stdio.h>     /* printf function */
#include <stdlib.h>    /* system function */
#include <sys/types.h> /* uid_t type used by setuid */
#include <unistd.h>    /* setuid function */

int main()
{
  printf("Invoking /bin/shutdown.sh ...\n");
  setuid(0);
  system("/bin/shutdown.sh" );
  return 0;
}

/bin/shutdown.sh is as follows:
echo Triggering poweroff command
/sbin/poweroff

I followed the developer guide to determine how to compile C programs specifically for the NAS.  In the case of the DS413j, it leverages a Marvell Kirkwood mv6282 ARMv5te CPU.  So I needed to leverage the toolchain for Marvell 88F628x.

As I had no native/physical linux machine available for the compilation, I decided to use VirtualBox on Windows and download / leverage the Lubuntu 12.10 VirtualBox image, which is a lightweight version of Ubuntu.  I set the network card to bridged, started the image, authenticated as lubuntu/lubuntu and updated/added some core packages:

sudo apt-get update
sudo apt-get install build-essential dkms gcc make
sudo apt-get install linux-headers-$(uname -r)

sudo -s
cd /tmp
wget http://sourceforge.net/projects/dsgpl/files/DSM%204.1%20Tool%20Chains/Marvell%2088F628x%20Linux%202.6.32/gcc421_glibc25_88f6281-GPL.tgz
tar zxpf gcc421_glibc25_88f6281-GPL.tgz -C /usr/local/

cat > /tmp/shutdown.c <<EOF
#include <stdio.h>     /* printf function */
#include <stdlib.h>    /* system function */
#include <sys/types.h> /* uid_t type used by setuid */
#include <unistd.h>    /* setuid function */

int main()
{
  printf("Invoking /bin/shutdown.sh ...\n");
  setuid(0);
  system("/bin/shutdown.sh" );
  return 0;
}
EOF

/usr/local/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc shutdown.c -o shutdown

lubuntu@lubuntu-VirtualBox:/tmp$ /usr/local/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc shutdown.c -o shutdown
lubuntu@lubuntu-VirtualBox:/tmp$ ls -ltr
total 16
-rw-rw-r-- 1 lubuntu lubuntu  308 Aug 22 01:29 shutdown.c
-rwxrwxr-x 1 lubuntu lubuntu 6715 Aug 22 01:29 shutdown
lubuntu@lubuntu-VirtualBox:/tmp$

Now that the shutdown executable was created, I uploaded it to the NAS to the /bin directory. 

Next I needed the user/group account infrastructure in place on the NAS in order to trigger it.  I created a user named “netshutdown” and a group named “shutdown” using the DiskStation Web UI Control Panel User/Group widgets.  I also made sure the SSH service was enabled (Control Panel > (Network Services >) Terminal > Enable SSH Service).

If you try to SSH to the NAS as the newly created user using username/password authentication, you will see that you are not presented with a shell.  This is because Synology locks the user down, which can be seen by viewing the passwd file while connected as root:

media> cat /etc/passwd
root:x:0:0:root:/root:/bin/ash
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
ftp:x:21:21:Anonymous FTP User:/nonexist:/sbin/nologin
anonymous:x:21:21:Anonymous FTP User:/nonexist:/sbin/nologin
smmsp:x:25:25:Sendmail Submission User:/var/spool/clientmqueue:/sbin/nologin
postfix:x:125:125:Postfix User:/nonexist:/sbin/nologin
dovecot:x:143:143:Dovecot User:/nonexist:/sbin/nologin
spamfilter:x:783:1023:Spamassassin User:/var/spool/postfix:/sbin/nologin
nobody:x:1023:1023:nobody:/home:/sbin/nologin
admin:x:1024:100:System default user:/var/services/homes/admin:/bin/sh
guest:x:1025:100:Guest:/nonexist:/bin/sh
mshannon:x:1026:100::/var/services/homes/mshannon:/sbin/nologin
netshutdown:x:1027:100::/var/services/homes/netshutdown:/sbin/nologin

Notice the netshutdown user has the shell set to “/sbin/nologin”, and the home directory set to “/var/services/homes/netshutdown”.  There is no such “homes” directory on my instance.

I edited /etc/passwd and changed
netshutdown:x:1027:100::/var/services/homes/netshutdown:/sbin/nologin
to
netshutdown:x:1027:65536::/home/netshutdown:/bin/sh
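For the record, the same edit can be scripted with sed instead of hand-editing the file. A sketch that practises on a scratch copy (the entry below is the example line from above; BusyBox sed on DSM also supports -i, but test on a copy before touching the real /etc/passwd):

```shell
# Practise on a scratch copy first -- never run sed -i on /etc/passwd untested.
cp_file=$(mktemp)
echo 'netshutdown:x:1027:100::/var/services/homes/netshutdown:/sbin/nologin' > "$cp_file"

# Swap GID 100 -> 65536, the home directory, and the shell, for this entry only
sed -i 's#^netshutdown:x:1027:100::/var/services/homes/netshutdown:/sbin/nologin$#netshutdown:x:1027:65536::/home/netshutdown:/bin/sh#' "$cp_file"

cat "$cp_file"   # netshutdown:x:1027:65536::/home/netshutdown:/bin/sh
```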

65536 is the group id of the new shutdown group:

media> cat /etc/group
#$_@GID__INDEX@_$65536$
root:x:0:
lp:x:7:lp
ftp:x:21:ftp
smmsp:x:25:admin,smmsp
users:x:100:
administrators:x:101:admin
postfix:x:125:postfix
maildrop:x:126:
dovecot:x:143:dovecot
nobody:x:1023:
shutdown:x:65536:netshutdown

I then created the home directory for the user:
mkdir -p /home/netshutdown
chown netshutdown /home/netshutdown

Test it out …

media> su - netshutdown

BusyBox v1.16.1 (2013-04-16 20:15:54 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

media> pwd
/home/netshutdown

My user/group was now in place, so it was time to set permissions and configure the shutdown executable and shell script:

cd /bin
chown root.shutdown shutdown
chmod 4750 shutdown

media> ls -la shutdown*
-rwsr-x---    1 root     shutdown      6715 Aug 22 09:31 shutdown
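For clarity, mode 4750 is the setuid bit (4) on top of rwxr-x--- (750): the binary runs as its owner (root), members of the shutdown group may execute it, and everyone else has no access. A scratch-file illustration:

```shell
# Scratch file to illustrate mode 4750 (setuid + rwxr-x---)
demo=$(mktemp)
chmod 4750 "$demo"

ls -l "$demo"          # shows -rwsr-x--- (note the 's' where the owner 'x' would be)
stat -c '%a' "$demo"   # prints 4750
```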

cat > /bin/shutdown.sh <<EOF
echo Triggering poweroff command
/sbin/poweroff
EOF

media> ls -la shutdown.sh
-rw-r--r--    1 root     root            48 Aug 22 09:35 shutdown.sh

chmod 700 shutdown.sh

media> ls -la shutdown*
-rwsr-x---    1 root     shutdown      6715 Aug 22 09:31 shutdown
-rwx------    1 root     root            48 Aug 22 09:35 shutdown.sh

For the netshutdown user to automatically trigger the shutdown executable on connection, I had a few different options:

Option 1) Change the user’s login shell from /bin/sh to be /bin/shutdown
Option 2) Create a .profile file for the user, and trigger the /bin/shutdown command from the user’s .profile

I decided on the latter.

echo "/bin/shutdown" > /home/netshutdown/.profile
chown netshutdown /home/netshutdown/.profile

Now it was time to test the power off …

(screenshot)

At this stage, we can now easily create a new putty session that connects via SSH as user netshutdown with username/password authentication.  We can invoke that saved session using a shortcut such as PUTTY.EXE -load "XXX Session Name".

If you are a sucker for punishment, you can make this a bit more complex by utilizing public key authentication (rather than username/password).  In its simplest form, you leverage a tool such as puttygen.exe to generate a keypair (private key and public certificate) for a particular user.  You enable public key authentication on your NAS, and upload the public certificate of the user to the NAS.  You then configure putty to authenticate using public key authentication and point it to the location of your private key.  You can go a step further by encrypting your private key so that a passphrase must be supplied on login in order to extract the private key and authenticate.  You can also utilize the Putty Authentication Agent (Pageant) to store the decrypted private key in memory, and have putty sessions consult Pageant for the private key on authentication.  This blog post does a good job of describing the options.

To get public key authentication up and running quickly on your NAS, follow steps similar to the following:

1) Run puttygen.exe
2) Generate a new SSH-2 RSA key without a passphrase
3) Save public key and private key to files (e.g publickey.txt and privatekey.ppk)
4) (This should have already been done) From Synology DiskStation UI, Go to Control Panel > (Network Services >) Terminal > Enable SSH Service

5) Next SSH using putty.exe to the NAS as the root user

6) Edit /etc/ssh/sshd_config

uncomment the following two lines
#PubkeyAuthentication yes
#AuthorizedKeysFile     .ssh/authorized_keys

and save the file
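Those two edits can also be scripted. A sketch that works on a scratch copy of the file (the two commented lines are the stock entries quoted above; adapt the filename to /etc/ssh/sshd_config on the NAS):

```shell
# Scratch copy standing in for /etc/ssh/sshd_config
conf=$(mktemp)
printf '%s\n' '#PubkeyAuthentication yes' \
              '#AuthorizedKeysFile     .ssh/authorized_keys' > "$conf"

# Strip the leading '#' from exactly those two directives
sed -i -e 's/^#PubkeyAuthentication yes/PubkeyAuthentication yes/' \
       -e 's/^#AuthorizedKeysFile/AuthorizedKeysFile/' "$conf"

cat "$conf"
```

Remember that sshd must be restarted (or the SSH service toggled in DSM) before the change takes effect.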

7) Connect as the end-user concerned, create the ~/.ssh directory, and create the file ~/.ssh/authorized_keys

8) Add the public key text from above in to the authorized_keys file and save it ...

ssh-rsa AAAAB3Nza......== ....

9) Change the permissions
chmod 700 ~/.ssh
chmod 644 ~/.ssh/authorized_keys
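Steps 7 to 9 can be collapsed into a few commands run as the end user. A sketch (the ssh-rsa line is a placeholder for your real public key text; HOME is pointed at a scratch directory here so nothing real is touched):

```shell
# HOME redirected to a scratch dir for this sketch; on the NAS, run as the
# target user against the real home directory instead.
HOME=$(mktemp -d)

# Create the .ssh directory and append the (placeholder) public key
mkdir -p "$HOME/.ssh"
echo 'ssh-rsa AAAAB3Nza...placeholder... netshutdown@media' >> "$HOME/.ssh/authorized_keys"

# sshd refuses keys whose directory/file permissions are too open
chmod 700 "$HOME/.ssh"
chmod 644 "$HOME/.ssh/authorized_keys"

ls -la "$HOME/.ssh"
```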

10) Open Putty and make the following changes to the session ...

Connection type: SSH
Connection->Data->Auto-login username: netshutdown
Connection->SSH->Auth->Private Key: Your Private Keyfile from above

11) Save the session - e.g. netshutdown-media-nas

12) Create a new shortcut on your desktop with the following target "C:\Program Files\PuTTY\putty.exe" -load "netshutdown-media-nas"

Notes - The private key above contains no passphrase, and is essentially equivalent to having a password stored in clear text in a file on the desktop.  Where security is required, configure the private key with a passphrase.  You can then run Pageant to hold the decrypted private key in memory by supplying the encryption passphrase; this prevents continuous passphrase prompting when establishing each new session.