Monday, October 27, 2014

JavaScript and Working With the Google Maps JavaScript API

The Introduction
I am the first to admit that JavaScript is one of my least favorite languages to work with.  It could have been the inability to really debug the code - unless you count scattering "alert('i am here')" calls throughout it.  Or it could have been my OCD and an experience in software development that, to that point, abhorred loose typing - JavaScript felt a bit too loose for my tastes.  It could also have been the poor documentation.  Whatever the reason, I've arrived at a reasonable working relationship with the language.  I've been given (and found) better tools to debug the script, I've learned to accept the loosely typed nature of the language (and even that has gotten better), and the documentation has improved significantly.

So recently I had a real opportunity to stretch my JavaScript skills beyond simple $.ajax() calls and form validation.  My current employer is putting together a web site where our business customers can examine the status of their services in real time, open trouble tickets, and follow up on the status of trouble tickets submitted either by them or on their behalf.  One of the visual aspects of this request is the ability to map where those services exist.  By services I mean advanced products called circuits, used to carry large amounts of voice or data - think 10 Mb to 1 Gb pipes.  The industry term for the locations of these circuits is "A to Z addresses".  You can likely guess what that means - each circuit has a start location and an end location.  If the circuit is simply a connection between the network provider and the customer's location, the circuit will only have an "A" location; the "Z" location is inferred to be the local office of the network provider.  On the other hand, if the circuit runs between two different offices it will have both an "A" location and a "Z" location.  Each service the customer has installed can consist of one to hundreds of circuits.

The Solution
The data stored about these circuits is heavily used and of extremely high quality, and it lives in a network inventory system (NIS).  In our business it is required to know where a circuit is installed, what equipment is located at the end points, and even whether portions of the circuit are being leased from another provider.  So getting the services and the A to Z locations for each circuit is relatively straightforward.  The disappointing part is that the NIS doesn't store the latitude or longitude of the circuit addresses - so those have to be looked up.  I decided early on that the Google Maps JavaScript API would be used to display the A to Z points on a map, and that if a circuit had both an A and a Z location, a line would be drawn between the two points to indicate that those markers were connected.

Because the customer could have several services, it was decided early that a separate map should appear for each service instance, showing only the circuits associated with that instance.

So first I'll introduce the base classes that were used for the initial prototype.

     
function MarkerAddress() {
    this.address = null;
    this.description = null;
    this.marker = null;
    this.drawLine = false;
    this.geoCodeResult = null;
    this.drawnLine = null;
}

function GoogleMapContainer() {
    this.companyMapInstance = null;
    this.serviceObjectId = null;
    this.googleListener = null;
    this.mapElement = null;
}

MarkerAddress will include the given address, a description that should appear on the Google Map marker, the marker object that appears on the map, a flag indicating that a line should be drawn from this marker to the prior marker in the array, and the result returned by the Maps API's geocoding service.

GoogleMapContainer will contain the element in which the map will appear, the service instance Id from the NIS, the Google listener handle, and an instance of the object that will do most of the work of looking up (and storing) the addresses for the circuits on that service instance.

     
// REQUIRES THE underscore.js library to be loaded!
function CompanyMapInstance() {
    this.googleGeoCodeInstance = new google.maps.Geocoder();
    this.googleMapInstance = null;

    // MarkerAddress entries: the addresses we pass in,
    // plus the extended properties that come back
    // from the Google Maps geocoder
    this.addresses = [];

    _.bindAll(this, "callBackGeoCode");
}

Finally there is the CompanyMapInstance object. Here is where instances of the Google geocoder and the Google Map object are stored, along with the array of MarkerAddress objects in addresses.  You might notice the call to _.bindAll(this, "callBackGeoCode").  I'll talk more about this later.

I won't go much into the details behind creating the instances of the GoogleMapContainer - I'll just say that a new instance is created for each service instance the customer has on their account, and an array holds them so an already-created GoogleMapContainer can be found later, since the customer can display/hide each of the service instances on the main page.
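For illustration, here's a hedged sketch of what that creation step might look like - the createMapContainer name and the map_<id> element convention are assumptions of mine, though the googleApis array matches the lookup used in the success callback below.

function createMapContainer(serviceObjectId) {
    var container = new GoogleMapContainer();
    container.serviceObjectId = serviceObjectId;
    // assumes one div per service instance, e.g. <div id="map_1234">
    container.mapElement = document.getElementById('map_' + serviceObjectId);

    container.companyMapInstance = new CompanyMapInstance();
    container.companyMapInstance.googleMapInstance = new google.maps.Map(
        container.mapElement,
        { zoom: 10, center: new google.maps.LatLng(39.5, -98.35) });

    // store the container so it can be found again as the
    // customer displays/hides service instances
    googleApis.push(container);
    return container;
}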

When a new service instance is requested for display, an $.ajax() call is made back to the server to obtain all the circuit addresses.  The addresses are hydrated as objects and placed into the CompanyMapInstance.addresses array.  Here's the initial version of the success callback invoked within the $.ajax() call.

     
success: function (circuitPoints) {
    var circuitPointList = JSON.parse(circuitPoints);

    if (circuitPointList.length > 0) {
        var containerId = circuitPointList[0].ServiceObjectId;

        // find the map container for this service instance
        var mapContainer = $.grep(googleApis, function (e) { return e.serviceObjectId == containerId; });

        if (mapContainer.length > 0) {

            var aCompanyMapInstance = mapContainer[0].companyMapInstance;

            for (var i in circuitPointList) {
                // normalize the data a bit - TODO - this could be better?
                var aMarkerAddress = new MarkerAddress();
                aMarkerAddress.address = circuitPointList[i].ALocationAddress;
                aMarkerAddress.description = circuitPointList[i].Description;
                aMarkerAddress.drawLine = false;
                aCompanyMapInstance.addresses.push(aMarkerAddress);

                if (circuitPointList[i].ZLocationAddress != null) {
                    aMarkerAddress = new MarkerAddress();
                    aMarkerAddress.address = circuitPointList[i].ZLocationAddress;
                    aMarkerAddress.description = circuitPointList[i].Description;
                    aMarkerAddress.drawLine = true;
                    aCompanyMapInstance.addresses.push(aMarkerAddress);
                }
            }
            
            // i've populated the addresses!!!!
            // now mark the points...and here's why the _.bindAll() is important!!
            aCompanyMapInstance.setMarkers();
        }
    }
}

Once the addresses are populated, the CompanyMapInstance method setMarkers is invoked.  It is displayed below.

     
CompanyMapInstance.prototype.setMarkers = function () {
    for (var i in this.addresses) {
        var address = this.addresses[i].address;
        this.googleGeoCodeInstance.geocode({ 'address': address }, this.callBackGeoCode);
    }
};

So for each address the Google geocode method is invoked to find the lat/long, with the CompanyMapInstance method "callBackGeoCode" registered as the callback for when the address is found.  Now you might have guessed why the _.bindAll is necessary: without it, "this" inside the callback would not refer to the CompanyMapInstance that invoked the geocode method, so the callback couldn't reach its addresses array.  With the binding in place, once the correct MarkerAddress has been found, the callback can pull the description and set it on the marker object, assign the marker object to the MarkerAddress instance, and store off the result of the geocode call.  The callBackGeoCode method is defined below.

     
CompanyMapInstance.prototype.callBackGeoCode = function (results, status) {

    var captionName = "A circuit point";

    if (status == google.maps.GeocoderStatus.OK) {
        // a ZERO_RESULTS response arrives as its own status value,
        // so no second check is needed inside the OK branch

            // pull the results...
            var latLong = results[0].geometry.location;
            var geoCoderObjectResult = results[0];

            // center the map on the last circuit.
            this.googleMapInstance.setCenter(latLong);

            // place the marker on the map.
            var marker = new google.maps.Marker({
                position: latLong,
                map: this.googleMapInstance,
                title: captionName
            });

            // now find this address in the array of this.addresses
            var item = this.findAddress(geoCoderObjectResult);

            if (item >= 0) {
                // found the address item...
                // save off the marker!
                this.addresses[item].marker = marker;
                // save off the geoCoderObjectResult!
                this.addresses[item].geoCodeResult = geoCoderObjectResult;

                marker.setTitle(this.addresses[item].description);

                if (this.addresses[item].drawLine) {
                    if (item > 0) { // make sure you aren't the first item in the list!
                        var priorAddress = this.addresses[item - 1];

                        if (priorAddress.geoCodeResult) {
                            // only try to draw the line IF you have a geoCodeResult!
                            var pathItem = [latLong, priorAddress.geoCodeResult.geometry.location];

                            this.addresses[item].drawnLine = new google.maps.Polyline({
                                path: pathItem,
                                geodesic: false,
                                strokeColor: '#FF0000',
                                strokeOpacity: 1.0,
                                strokeWeight: 2,
                                map: this.googleMapInstance
                            });
                        }
                    }
                }
            }
    }
};
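One method the listing above relies on but doesn't show is findAddress, which returns the index of the matching MarkerAddress (or -1 when nothing matches).  Here's a hedged sketch of one way it could work - the matching rule (comparing the street line of the stored address against the geocoder's formatted address) is an assumption, not the original implementation.

CompanyMapInstance.prototype.findAddress = function (geoCodeResult) {
    for (var i = 0; i < this.addresses.length; i++) {
        var entry = this.addresses[i];
        // skip entries that already have a geocode result assigned
        if (entry.geoCodeResult) {
            continue;
        }
        // assumed matching rule: the street line of the stored address
        // appears somewhere in the formatted address Google returned
        var streetLine = entry.address.split(',')[0].toLowerCase();
        if (geoCodeResult.formatted_address.toLowerCase().indexOf(streetLine) !== -1) {
            return i;
        }
    }
    return -1;
};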

Mind you, there's still plenty of code and testing that needs to take place - however, the initial results were quite exciting, and they provided a great opportunity to flex my JavaScript skills.

Thursday, October 23, 2014

Upgrading to AD FS 3.0

I always felt the implementation of AD FS 2.0 was clunky in Windows 2008 and from all appearances was a bolt-on.  With the release of .NET 4.5, WIF and AD FS support was built into the framework.  As a bonus of this upgrade, AD FS was better baked into the Windows 2012 operating system.  Microsoft also made some significant changes to the technology that were impressive and potentially worth the upgrade.

I was finally given an opportunity to move to Windows 2012 R2 and upgrade our STS to AD FS 3.0.  There were a number of positive changes that you can read about on other blogs and on Microsoft's web site(s) on the subject of AD FS 3.0.  Here are a few observations I noted during the process of getting AD FS 3.0 working in our testing environment.

First, an AD FS 3.0 (Windows 2012 R2) proxy will not work with an AD FS 2.0 (Windows 2008 R2) service.  It was clear during setup that the new proxy server was able to communicate with the old AD FS 2.0 service; however, it wasn't able to save the new proxy settings.  I would always get the error "Unable to save profile."  I am sure that if I had used the PowerShell commands I could have gotten a better error message.  This experiment was enough to justify requesting another Windows 2012 R2 server where I could install and configure an AD FS 3.0 service.

Second, before setting up an AD FS 3.0 service (Windows 2012 R2) against a Windows 2008 R2 Active Directory server you have to upgrade the AD data store.  This is documented on Microsoft's TechNet site here.  The good news is that any existing AD FS 2.0 proxies/services will not be affected by this upgrade - you can continue to use them without any issue.  Additionally, all the AD management software on your Windows 2008 R2 server will continue to work as expected - likely a given to most, but it was something that needed to be tested prior to a production rollout.

Third, AD FS 3.0 and AD FS 2.0 proxy/service servers can co-exist without any conflict.  However, you can't load balance them or expect them to behave in a cohesive fashion - you must treat them as two different endpoints for RPs to send login requests to.  This is helpful if you want to roll out the new proxy/service servers without affecting any existing RPs.  We took advantage of it by slowly migrating existing RPs to the new servers; any new RPs would automatically use the new proxy/service servers.

Fourth, the wizard to set up the AD FS 3.0 service server didn't work for me, and based upon the PowerShell script it creates, it isn't clear to me how it would work for anyone.  I hit a couple of stumbling blocks.  First, I needed a certificate that matched the domain in which I was installing the AD FS service.  I understand that using a certificate issued for the same domain is the normal scenario, but our AD FS 2.0 instance in the test environment was set up with a certificate issued for a different domain, so I created and installed a temporary certificate to get past the first set of errors.  Second, the Install-AdfsFarm cmdlet in PowerShell requires the name and credentials of a service account that will be used for the setup process as well as for access to any MS SQL instance you will be using.  Those credentials weren't in the PowerShell script, nor was there a prompt to ask for them - when the wizard executed you'd never get prompted for the user/password (this despite the need to provide one during the wizard setup process!).  The setup wizard also doesn't give you very good (any) error messages to help you complete the setup when there is an issue; the certificate problem and the missing credentials were all errors suppressed by the wizard, which only reported "An error has occurred" with each unsuccessful attempt.  A great deal of time would have been saved by using PowerShell to begin with.  Take my advice and skip the wizard for the creation of the first node in the AD FS farm and use the PowerShell command.
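For reference, here's a minimal sketch of that PowerShell route - the thumbprint, service name, and SQL connection string are placeholders, and the exact parameters you need will vary with your farm layout:

Import-Module ADFS
Install-AdfsFarm `
    -CertificateThumbprint "<thumbprint of your service certificate>" `
    -FederationServiceName "sts.example.com" `
    -ServiceAccountCredential (Get-Credential -Message "AD FS service account") `
    -SQLConnectionString "Data Source=SQLHOST\INSTANCE;Integrated Security=True"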

Fifth, when installing the AD FS 3.0 service do not upgrade your AD FS 2.0 MSSQL database.  Doing so will effectively leave your AD FS 2.0 installation in a non-functional state.  Use a different MSSQL server (or instance), or if in a test environment, use the internal MSSQL instance running on the server where you are installing AD FS 3.0.  There is adequate warning for this - well, at least there is if you are paying attention.  The installation process will tell you that it found an existing AD FS data store and that it will be overwritten during the process.  "Overwritten" should be the giveaway that if you continue you won't be using your AD FS 2.0 proxy/service anymore.

And finally, the new login screen in AD FS 3.0 prevents most customization.  It allows for some basic changes, which you can read about here.  However, overriding the behaviors within the onload.js file, adding JavaScript libraries, or adjusting the login page's HTML from within the onload.js file is an "at your own risk" affair, and Microsoft will not provide any support.  This is of course expected - but I found it entertaining that Microsoft's own site shows you how to override the default behavior to allow someone to enter only a user name on the login page.  I understand why this was done, but I also found it irritating, as it would take a bit of effort to provide the same functionality/customization that was present in our AD FS 2.0 login page.  And trying to make a page look good while adding new elements in JavaScript is difficult for the best of web developers.

Thursday, May 8, 2014

Speeding Up Slow Processes

The Problem
I recently upgraded portions of a consumer portal identity management system. Part of this upgrade included caching charge records from the billing system, which runs on an IBMi system. The API into this mainframe system is slow, and what's worse, the mainframe's interactive mode is turned off during the nightly processing, which includes processing service orders, running reports, and finally executing any billing processes (the environment is turned over to batch mode).  This essentially means that the API is down until interactive mode is active again.  The latter problem is primarily why the charge records of the billing system needed to be cached in a database that would be available 24x7 for the identity manager and related functions on the consumer portal. These billing elements are used to control SAML assertions that are sent to other relying parties (or service providers). They also control and define behaviors on a page that displays what applications are available to the customer (e.g. phone manager, email manager, etc.). Links are also displayed on this page for a number of 3rd-party applications that are not SAML compliant and require a place where identity information can be shared using various custom security schemes.

This is the second customer portal identity management system that I've worked on for this company. In the first system, when large changes like this are made, or if there have been significant problems with the interface that provides updates to the customer information (think CRUD operations), an application called "AccountSync" is executed. The idea is that AccountSync will update the identity management system, and the related cached data, with current data from the billing system. Since this system is only eight months old, there had not yet been a need to run AccountSync against the IBMi billing system. There were still a few weeks before this information would be needed - but early deployment of the portions that populate the cached data seemed a reasonable way of making sure all was in order when the final changes were deployed to the live system. So the database was upgraded, the data contracts for the web services were updated with the new structures, and everything was working great - data was being populated as the system was notified of service orders being completed in the billing system. In addition to these changes, a small change to the AccountSync application was required since the mainframe would be placed in batch mode during the nightly processing between 1900 and 0400. Once those changes were made the application was run.

And run it did...for a solid week. There are only ~40,000 customers! A solid week! The first system has ~66,000 customers and AccountSync would take only ~2.5 days. Clearly something must be done to increase the efficiency of this process especially when the long term goal is to migrate all customers into one consumer portal.

The First Solution
The first thing I considered was running several different copies of AccountSync and splitting the work between them by returning only a small subset of customers, retrieved from the database, for each process to work on independently.  While an easy and cheap solution to the problem, it lacked the ability to run without being re-balanced by hand before each execution.  And frankly, the 'solution' was a kludge.

The Second (and final) Solution
After discarding the first idea I started to investigate the parallel programming additions made in .NET 4.5 and was quickly rewarded when I came across TPL Dataflow.  This isn't something that comes standard with .NET 4.5; rather, it is a library that you can install via NuGet within Visual Studio.

Coding a multi-threaded application was extremely easy with this library.  I first defined a BufferBlock which contained the list of customers that needed to be processed.
private static readonly BufferBlock<long> CustomerBufferBlock = new BufferBlock<long>();
I then loaded up this buffer with the list of customers. The activeCustomers in this example is a list of customer Id's that were obtained from the database.
activeCustomers.ForEach(customerId => CustomerBufferBlock.SendAsync(customerId));
An ActionBlock needs to be defined.  This is kind of tricky: within the ActionBlock you tell it what method to run and what the parameters to that method are, and optionally (I actually recommend it) you tell the ActionBlock how many threads to execute.  I set up AccountSync to run the number of threads defined in the configuration file - essentially this could be throttled up or down depending upon need and performance.
// initialize the ActionBlock
ActionBlock<long> customerActions = new ActionBlock<long>(s => ProcessCustomer(s, dao, sleepingHour, restartHour),
        new ExecutionDataflowBlockOptions() { MaxDegreeOfParallelism = threadCount });
The method ProcessCustomer needs to be defined in a certain fashion in order for the ActionBlock to be notified of completion events.  You will notice that I pass in parameters telling each thread when it should sleep and when it should wake up.  This was necessary so that each thread could be forced to sleep while the IBMi was placed in batch mode and then resume when interactive mode returned the next morning.
private static async Task ProcessCustomer(long customerId, IPortalDBDao dao, int sleepingHour, int reStartHour)
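Only the signature matters for the ActionBlock, but for illustration, here's a hedged sketch of how the sleep/wake logic inside the method might look - the dao.SyncCustomer call is a hypothetical stand-in for whatever per-customer work AccountSync actually performs:

private static async Task ProcessCustomer(long customerId, IPortalDBDao dao, int sleepingHour, int reStartHour)
{
    var now = DateTime.Now;

    // If the IBMi is in batch mode (e.g. between 1900 and 0400), park this
    // task until interactive mode resumes.
    if (now.Hour >= sleepingHour || now.Hour < reStartHour)
    {
        var wakeUp = now.Hour >= sleepingHour
            ? now.Date.AddDays(1).AddHours(reStartHour)
            : now.Date.AddHours(reStartHour);
        await Task.Delay(wakeUp - now);
    }

    // hypothetical stand-in: pull current billing data and refresh the cache
    dao.SyncCustomer(customerId);
}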
Finally I needed to link the BufferBlock to the ActionBlock and begin executing the Tasks.
// Link BufferBlock to ActionBlock
CustomerBufferBlock.LinkTo(customerActions);
CustomerBufferBlock.Completion.ContinueWith(task => customerActions.Complete());

// Tell the buffer block I'm done populating it with data - it appears you
// can continue to load it as needed while the process runs.
CustomerBufferBlock.Complete();

// Now wait until all the items in the buffer block have been executed by the action.
customerActions.Completion.Wait();
Once I made these small changes I ran through the process a few times in single-threaded mode to ensure that the defined Task behaved correctly with the sleep/restart parameters. Then I slowly ratcheted up the number of threads until I started to get diminishing returns in performance.  I found that locally, running five threads performed about 60% better than running single-threaded.

Conclusion
When everything seemed to be working and I had found the sweet spot between the number of threads and performance, I ran this in production.  And it ran...it finished the job in just under 28 hours.  This was done without really pushing the processing power of the systems involved - next time this is executed I'll try adjusting the threads to see if I can get the process to run even faster.  But I am more than pleased with the results the first time around.

Tuesday, April 1, 2014

Java/.NET encryption

Where I work we have what I would call a Ferrari of security and single sign-on technology - we are using AD FS as our SSO technology.  For the most part we've been successful in getting vendors to comply and partner with us to ensure that our customers' experiences are as secure as possible.  With vendors (SPs) who use .NET we encourage WS-Federation, while for those using Java or PHP we suggest SAML 2.0 to interface with our AD FS IdM.  Then there's a third classification of vendor - those who won't, or those who simply can't get it together enough to hook up to our IdM.  There's an even more special category of vendor who can't really understand why doing an HTTPS GET with a base-64 encoded string that contains your customer's login ID and password is called "completely unacceptable" during a conference bridge.  Sadly enough, I had to work with such a vendor recently - I even attempted to make the process more secure by rigging up a handshake that would be easy to implement and secure enough for the customer.

This handshake was rather easy - really.  We'd take care of performing the authentication and initial authorization steps of the login process; in other words, we'd ensure that the customer provides a good user name and password and has the product which gives them some entitlement to enter the vendor-supplied web site.  In pseudo-code, these are the steps the handshake needed to take.
  • Format a plain text string with the following elements: &CustId=xxxx&TimeStamp=<Julian date/time value>.  The CustId element was taken from the assertions being sent, while the time stamp was used as random filler for the encryption process.
  • Take the plain text and encrypt the string using a shared certificate.
  • Convert the binary encrypted data into a Base-64 string.
  • Sanitize the Base-64 string so that it can be passed along as a parameter to the vendor web site.
To obtain the information needed by the vendor (e.g. the CustId), the steps above are reversed.  The value after "&CustId=" can then be used to look up the customer and provide them with the tools necessary to perform the actions they wish.

Our technology stack is .NET - the vendor's is Java.  I haven't been in the Java space in over a decade at this point, but I was pretty certain that whatever encrypted string I came up with could easily be processed on the Java side by a developer.

Here's the code to build the handshake data.
public static string EncryptQueryString(string handshakeName, string plainStringToEncrypt)  
     {  
       string digitalCertificateName   
         = CommonConfigurationManager.GetNonSsoHandshakeEncryptCert(handshakeName);  
       X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);  
       StorePermission sp = new StorePermission(PermissionState.Unrestricted);  
       sp.Flags = StorePermissionFlags.OpenStore;  
       sp.Assert();  
       X509Certificate2 certX5092 = null;  
       store.Open(OpenFlags.IncludeArchived);  
       if (digitalCertificateName.Length > 0)  
       {  
         foreach (X509Certificate2 cert in store.Certificates)  
         {  
           if (cert.SubjectName.Name != null &&   
             cert.SubjectName.Name.Contains(digitalCertificateName))  
           {  
             certX5092 = cert;  
             break;  
           }  
         }  
         if (certX5092 == null)  
         {  
           throw new Exception("No Certificate could be found in name " + digitalCertificateName);  
         }  
       }  
       else  
       {  
         certX5092 = store.Certificates[0];  
       }  
       string plainString = plainStringToEncrypt.Trim();  
       byte[] cipherbytes = Encoding.UTF8.GetBytes(plainString);  
       RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)certX5092.PublicKey.Key;  
       byte[] cipher = rsa.Encrypt(cipherbytes, false);  
       string cipherText = Convert.ToBase64String(cipher);  
       return cipherText;  
     }  

So there's a great deal going on here, but essentially I find the certificate with the name given in the config file for this handshake interface.  When found in the certificate store, I use it to encrypt the plain text string into an array of bytes, which is then converted to a Base-64 string.  Code further up the chain fixes up the Base-64 string to ensure that characters are properly URL-escaped where necessary.
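That fix-up code isn't shown here, but a minimal sketch of it could be a single call - HttpUtility.UrlEncode percent-encodes the +, / and = characters so the Base-64 value survives the query string (the variable name is illustrative):

// escape the Base-64 token so it can travel as a query string parameter
string safeToken = System.Web.HttpUtility.UrlEncode(cipherText);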

I then got my initial feedback from the vendor, and it wasn't encouraging - it indicated that the vendor didn't really understand how this all worked and lacked the basic skills necessary to pull it off.  I can only imagine what would have happened if we had forced them into SAML 2.0 - well, I can, and when it does happen it won't be pretty.  After a few back-and-forths and some hints I gave up and wrote the Java side of this for them, which reverses the process so we can get the "CustId" from the encrypted string.

public static void main(String[] args) {  
           // TODO Auto-generated method stub  
           String uriString = "a%2bbO7VR6tQ4EJd5voT5gWxdUC4ELTQR5dvzDL4a8f71AQYK2TTW%2bvzbg%2fjoBqlAMrS9HfAYTWMfVuBSLY4Lk0ksWKg249cprtbxzlOFaPL19M9GQSrQLwwKNfx8MWyuiI2YZlgteM8cJBPIq5y9S2JU40jx38fEULEBTp83%2fsdylcUTSx9sGO1MWGRkW2ux2a6FiP7xcIYiqldAwnx8KRasCL3iC7jKb8oyPj9g3r7gHzs1jOLpwUahNC4%2fR9skXfQBqSDV3CuApuXCoUIKi6QNOlHp6iLjkpuz1vNAZJgQn4U0hlmZ%2bJDW3sl7v09VdPlTjQgIrlKroYNH1csBmng%3d%3d";  
           String uriString1 = "N7hoRdMeapijdzo4z35lcOdT%2b4x7cMk7LvkJMLClax5YCPsqcopeyGIyZKlb0NZ%2baAjEtTq%2fSgc43QvXarx9YX1GWxXmrCyZykSirZOpKdRiMp%2fswNLRsYaPyIj4UBZAmvMoDm%2bW3fXX%2bwslGJMPg10AjXGHC7O6G8%2f64yO7zCBF3j5i8HI7lEhlMtIfE7%2fn%2bm5dhwkC9ZEtEpZMOiFlqnqT2OIhzzMySzyNFk6Y1lUWrjk7S6%2bL1a7Gihr%2fYjjPX9Pt6RPQ9gCo4oFbfNjtqbPYHttEiJerVCNm6eYP4AiU%2f0c54YAA2DfRxDhuQW%2bq%2b%2flS83RahAy4JyRJVy%2b5ug%3d%3d";  
           String uriString2 = "opbLFa5UpBEK1wDtVNZ0j7srfqx447fMTAZThTL4Cr4xWYrzIpJv9qrAy3yKG73Lt7fUGWk7q%2foiy0f2r2ZjexI8lKvnAbrtD6URK3G1NFswY6PsH99YrsKdz2%2f2qvvbWfjlntqFrOitIS3Ndyt2PPBVmLCiWFSDy%2fxjE%2bL3XYo2VdEWAL%2fjpyzHhfxC1C84nytAINXDoECKVeU6n2zCg9%2bKAeM4keNpawRtFJXlB4nYj8sUayQy4LedfZk5JR%2bBMq5HWED6QCuGoAbZ7D9ablIMY%2bqKfZOd5zjSFN57qsM7Lgozdu5F80bxKDqrvR3C7GFK19YInxlEKy%2fpo8mmNg%3d%3d";  
           String unEncodeString = "";  
           final String PRIVATE_KEYFILE = "private_key.der";  
           final String CERT_FILE = "binary_cert.cer";  
           try {  
                unEncodeString = unEncodeUri(uriString2);  
                byte[] encryptedStuff = Base64.decodeBase64(unEncodeString);  
                PrivateKey privateKey = getPrivateKey(PRIVATE_KEYFILE);  
                InputStream inStream = new FileInputStream(CERT_FILE);  
                CertificateFactory cf = CertificateFactory.getInstance("X.509");  
                X509Certificate cert = (X509Certificate)cf.generateCertificate(inStream);  
                Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding", "SunJCE");  
                cipher.init(Cipher.DECRYPT_MODE, privateKey);  
                byte[] unenCryptedData = cipher.doFinal(encryptedStuff);  
                System.out.println(new String(unenCryptedData));  
           } catch (UnsupportedEncodingException | FileNotFoundException | CertificateException | NoSuchAlgorithmException | NoSuchPaddingException e) {  
                e.printStackTrace();  
           } catch (InvalidKeyException e) {  
                e.printStackTrace();  
           } catch(BadPaddingException e) {  
                e.printStackTrace();                 
           } catch (IllegalBlockSizeException e) {  
                e.printStackTrace();  
           } catch (NoSuchProviderException e) {  
                e.printStackTrace();  
           }  
           System.out.println(unEncodeString);  
      }  
      private static PrivateKey getPrivateKey(String privateKeyFile)  
      {  
           try {  
                RandomAccessFile raf = new RandomAccessFile(privateKeyFile, "r");  
                byte[] buff = new byte[(int)raf.length()];  
                raf.readFully(buff);  
                raf.close();  
                PKCS8EncodedKeySpec kspec = new PKCS8EncodedKeySpec(buff);  
                KeyFactory kf;  
                kf = KeyFactory.getInstance("RSA");  
                return kf.generatePrivate(kspec);  
           } catch (IOException | NoSuchAlgorithmException | InvalidKeySpecException e) {  
                // TODO Auto-generated catch block  
                e.printStackTrace();  
           }  
           return null;   
      }  
      private static String unEncodeUri(String uriString) throws UnsupportedEncodingException  
      {  
           // use the static decode(String, enc) form - the no-arg instance form is deprecated  
           return URLDecoder.decode(uriString, "UTF-8");  
      }  

There are a couple of tricky parts here.  First, the Cipher instance MUST be obtained with the transformation above ("RSA/ECB/PKCS1Padding").  Not doing this will cause an error because the .NET side encrypted using PKCS#1 padding (the false argument to rsa.Encrypt).  Second, you must read in a private key file and obtain the private key in order to decrypt the string.  The private key file can be converted to the required PKCS#8 DER format from the key that accompanies the certificate using the OpenSSL tool.  The best command I found for this is below:

openssl pkcs8 -topk8 -nocrypt -outform DER -in privatekeyfile.key -out host.pk8

Prior to decrypting and pulling out the needed results, the URL-escaped string must be converted back to a standard Base-64 string, and then into an array of bytes.  This is accomplished in the method unEncodeUri and by calling the Apache Commons method Base64.decodeBase64(unEncodeString), which does the job well.  Once that's done, decrypting the string is rather easy.  Finally, the line System.out.println(new String(unenCryptedData)) displays the decrypted data on the console.