Thursday, July 19, 2018

Contact Me Page for a S3 Hosted Website Part 1

In an effort to branch out and increase my knowledge of cloud computing, I recently took on contracts to redo the websites of a couple of local businesses.  The existing sites weren't bad necessarily - just aged and in need of some content, style, and SEO related updates.  Additionally, the hosting fees for these sites were pretty large considering they were simple 'pamphlet style' sites.

The goals of the new websites were pretty simple - reduce hosting costs and increase SEO visibility.

The latter is a level of alchemy unknown to most, and there are simply better qualified people to discuss the topic, so I'll avoid it here.  It should be sufficient to state that I did a bunch of research, found a reasonably priced expert, and borrowed some ideas from the SEO efforts already being made by my employer.  I mixed these ideas together to produce something that kept these businesses on page one for most of their search terms.  Additionally, their businesses typically landed in the first three listings on Google Maps when searching for "<insert relevant term> near me".

The former is what I was able to focus on - and I felt the most reasonably priced option was going to be Amazon Web Services (AWS).  When discussing the necessary features with each of the business owners there were some common requirements, summarized below.

  • Clean updated look
  • No specific need to update the site frequently
  • Needed better tracking via Google Analytics, including the ability to track events
  • Need a "contact me" form.  In one case there would be several, on different landing pages.  In another case the ability to attach several images was required.

It's that last bullet point that really caused me some initial alarm - "how am I supposed to handle a form post on a static S3 web site?"  After some initial searching I narrowed it down to two potential ways a form post could be handled on an S3 hosted web site.

Post to S3 Bucket

Not a great option and fraught with problems

Amazon has a great write up on how you can perform a form post and have a file (just the file) stored in an S3 bucket here.  If you haven't had any exposure to the AWS security model, or really any experience with AWS, the instructions are a bit obscure and implementation can be problematic.  However, once implemented the first time it becomes easier to set up someplace else.
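
To make the discussion concrete, here is a rough sketch of what such a form post looks like.  The field names follow Amazon's documentation, but the bucket, policy, and signature values are placeholders - they have to be generated for your own bucket and credentials.

    <!-- A minimal sketch of an S3 POST form; bucket, policy, and signature values are placeholders -->
    <form action="https://my-bucket.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
        <input type="hidden" name="key" value="uploads/${filename}" />
        <input type="hidden" name="acl" value="private" />
        <input type="hidden" name="success_action_redirect" value="https://example.com/thanks.html" />
        <input type="hidden" name="X-Amz-Credential" value="AKIA.../20180719/us-east-1/s3/aws4_request" />
        <input type="hidden" name="X-Amz-Algorithm" value="AWS4-HMAC-SHA256" />
        <input type="hidden" name="X-Amz-Date" value="20180719T000000Z" />
        <input type="hidden" name="Policy" value="base64-encoded-policy" />
        <input type="hidden" name="X-Amz-Signature" value="hex-encoded-signature" />
        File: <input type="file" name="file" />
        <input type="submit" value="Send" />
    </form>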

So what's the problem with going this direction?

First, the learning curve for the setup.  I found trying to get the policy and signature settings correct highly troublesome.  There is a nice tool for this that I found here, which helped alleviate the setup.  However, this quickly became cumbersome as requirements for more fields on the contact page were added; having to re-run it with each iteration became quite a torturous exercise.  Additionally, you have a built-in expiration date in the X-Amz-Date element in the form.  I've seen some people generate the policy and signature settings on the fly right after a page load to avoid this expiration issue.

Second, while this worked as expected, we were limited in the type of data.  Sure, this allowed me to drop an image file (and only one!) in the S3 bucket, but now I needed another post (typically via an .ajax() call) to send the rest of the contact data (like name, email, etc.) someplace.  An added bonus - the contact information needed to be sent prior to posting the image file, as dictated by the "success_action_redirect" element in the form and the behavior that occurs after the form post.

Finally, there was a great deal of extra code on the back-end to deal with having the contact information before having the image file.  After all, what's the point of sending the new prospect to the company if you can't attach the image to the email?

None of these could be considered show stoppers.  But going this direction seemed kludgey and felt very unnatural.  Additionally, I felt this approach introduced a heavy support burden which I didn't want to deal with - after all, I have a day job.  However, what finally killed this approach was that I simply couldn't figure out how to post more than one image file - I'm sure there's a way, but I quickly lost interest given the other constraints outlined above.

API Gateway/Lambda


Felt more natural...easier to implement...faster learning curve

If you haven't been introduced to, heard of, or used the Serverless Framework, I suggest you stop reading now and go learn this tool.  It is, in my mind, a game changer.  I had been struggling with how to use the API Gateway/Lambda combination for some time when I came across the Serverless Framework.  And while I got 'something' working without it, I was still weeks away from getting a proof of concept off the ground for this contact form.
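
As a point of reference, defining a Lambda function behind an API Gateway POST endpoint with the Serverless Framework only takes a handful of lines.  The sketch below is illustrative - the service name, handler, runtime, and path are placeholders rather than the actual configuration from this project.

    # serverless.yml - a minimal sketch (names, runtime, and path are placeholders)
    service: contact-form

    provider:
      name: aws
      runtime: nodejs8.10

    functions:
      contact:
        handler: handler.contact
        events:
          - http:
              path: contact
              method: post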

There are some pitfalls with this approach - so don't think this solution will work in all cases.  When sending files through API Gateway there are two hurdles.  First, you can't send the raw file(s); you must base64 encode them prior to sending.  To do this, first add this method to your javascript code:

    // Read a File object and resolve with its base64 encoded data URL.
    function getBase64(file) {
        return new Promise(function (resolve, reject) {
            if (file) {
                var reader = new FileReader();
                reader.onload = function () {
                    resolve(reader.result);
                };
                reader.onerror = reject;
                reader.readAsDataURL(file);
            } else {
                reject(new Error("No file was provided"));
            }
        });
    }

Second, prior to posting the form to API Gateway you have to intercept the file(s) on the form and convert them into base64 string(s).  This is how I implemented it:

                // fileCount and form are defined earlier in the page's script.
                var filesize = 0;
                var promises = new Array();

                for (var i = 0; i < fileCount; i++) {
                    var image_file = document.querySelector("#image_file").files[i];
                    promises.push(getBase64(image_file));
                }

                // Wait for every file to finish encoding before touching the form.
                Promise.all(promises).then( function (imageDatas) {

                    for (var i = 0; i < imageDatas.length; i++) {
                        filesize += imageDatas[i].length;

                        if (filesize > 0 &&
                            filesize < 6e6) {

                            // Add each encoded file to the form as a hidden input.
                            $('<input>').attr({
                                type: 'hidden',
                                id: 'image-file-' + i,
                                name: 'image-file-' + i,
                                value : imageDatas[i]
                            }).appendTo($(form));
                        }
                    }

                    postForm($(form).serializeArray());
                });

So here I am doing a couple of things.  First, the conversion to base64 returns a promise for each file.  Second, I need to wait for all of those promises to resolve.  Finally, for each file added to the form I create a new hidden input with the base64 encoded string as its value.

So what's with this code?

                        filesize += imageDatas[i].length;

                        if (filesize > 0 &&
                            filesize < 6e6) {

Yes, there is a gotcha here... the payload sent through API Gateway to Lambda cannot exceed 6MB (hence the 6e6 check above) - which for our purposes was more than sufficient.  If you need to send something larger than that, the S3 form post approach is better suited to your needs.
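
The postForm helper referenced in the snippet above isn't shown here.  A minimal sketch of it, assuming a jQuery $.ajax() post to your API Gateway stage URL (the URL below is a placeholder), might look like this:

    // A sketch of postForm - the endpoint URL is a placeholder for your own API Gateway stage URL.
    function postForm(formData) {
        $.ajax({
            url: 'https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/contact',
            type: 'POST',
            data: formData,
            success: function () {
                // let the visitor know the message was sent
            },
            error: function () {
                // show an error message and allow a retry
            }
        });
    }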

So that covers my initial problem - how to send information from a website hosted on S3 to a back-end system.  You can review my full implementation here to see all the code for the html/js files.  My next post will go more into the back-end code that captures this data and then emails the business their new lead information.

Tuesday, March 21, 2017

D3.js - The search for the working responsive demo.

Had some fun recently where I needed to add some graphing to a web page.  The purpose was to give customers an idea of how much data they were consuming, when they were consuming it, and how much of their bandwidth was being utilized.  The idea was to help them understand their patterns so that they could better understand why their internet was slowing down.  Coupled with the knowledge of what devices were on their network during these times, this provided a nice debugging and education tool for our customers and technical support team.

But during the development I found a maze of twisty passages on how to use the D3 graphics library, and how to use that library within a responsive web page.  Most of it sadly either didn't work (old versions of D3) or was so badly documented as to be essentially unusable.  What I came up with likely won't win awards for good code design, but I hope it provides an easier approach to this problem for whoever stumbles across it.  For this demonstration I used a simple bar chart that rises up from the bottom of the <svg> element.  It uses a simple array of numbers as its data source, stored in a JSON file on the web server (which could easily be replaced with a REST call that returns the same data!).

Really there are only two major components.  The first is to set up a number of helper methods that obtain the height and width of the element containing the <svg> element where your graphic has been placed.  I ended up with nine methods - getBarHeight, getBarWidth, getXPosition, getYPosition, getTextXPosition, getTextYPosition, colorPicker, setWidth, and setHeight.  Most of these should be self-explanatory, and a full listing of them is below.  Earlier in the javascript I defined a variable called containerElement which stores the element containing the <svg>.  Additionally, another variable - dataItems - is populated with my data set, in this case a simple array of numbers.  However, you could substitute an array of objects just as easily.

function getBarHeight(dataItem) {
        return containerElement.clientHeight * (dataItem/100);
    }

    function getBarWidth() {
        return (containerElement.clientWidth / dataItems.length) - padding;
    }

    function getXPosition(dataItemPosition) {
        return dataItemPosition * ( containerElement.clientWidth / dataItems.length) + padding;
    }

    function getYPosition(dataItem) {
        return containerElement.clientHeight - getBarHeight(dataItem);
    }

    function getTextXPosition(dataItemPosition) {
        return getBarWidth()/2 + (dataItemPosition * ( containerElement.clientWidth / dataItems.length) - padding);
    }

    function getTextYPosition(dataItem) {
        return containerElement.clientHeight - getBarHeight(dataItem) - padding;
    }

    function colorPicker(dataItem) {
        if ( dataItem >= containerElement.clientHeight/2 ) {
            return "red";
        }
        return "blue";
    }

    function setWidth() {
        return containerElement.clientWidth;
    }

    function setHeight() {
        return containerElement.clientHeight;
    }

Essentially the magic was to find the clientHeight/clientWidth of the container of the <svg> element and perform some simple math to determine the width, height and relative positions for the bar chart items.

The second piece to this mystery was registering a handler on the window resize event that would, in this case, redraw the bar graph when the event fired.

var BarChart = (function(window, d3) {

    /* TODO - set for max/min height/width*/

    var dataItems = [];
    var padding = 3;
    var svg = null;
    var svgElement = [];
    var containerElement = [];

    function createBarGraph(elementName, svgName, urlAction) {

        containerElement = $("#" + elementName)[0];

        svg = d3.select("#" + elementName)
            .append("svg")
            .attr("id", svgName)
            .attr("width", setWidth())
            .attr("height", setHeight());

        svgElement = $("#" + svgName);

        d3.json(urlAction, initData);

        d3.select(window).on('resize', resize);
    }

    function getBarHeight(dataItem) {
        return containerElement.clientHeight * (dataItem/100);
    }

    function getBarWidth() {
        return (containerElement.clientWidth / dataItems.length) - padding;
    }

    function getXPosition(dataItemPosition) {
        return dataItemPosition * ( containerElement.clientWidth / dataItems.length) + padding;
    }

    function getYPosition(dataItem) {
        return containerElement.clientHeight - getBarHeight(dataItem);
    }

    function getTextXPosition(dataItemPosition) {
        return getBarWidth()/2 + (dataItemPosition * ( containerElement.clientWidth / dataItems.length) - padding);
    }

    function getTextYPosition(dataItem) {
        return containerElement.clientHeight - getBarHeight(dataItem) - padding;
    }

    function colorPicker(dataItem) {
        if ( dataItem >= containerElement.clientHeight/2 ) {
            return "red";
        }
        return "blue";
    }

    function setWidth() {
        return containerElement.clientWidth;
    }

    function setHeight() {
        return containerElement.clientHeight;
    }

    function setGraph() {
        svg.selectAll("rect")
            .data(dataItems)
            .enter()
            .append("rect")
            .attr( "x", function(d, i) { return getXPosition(i); })
            .attr( "y", function(d, i) { return getYPosition(d); })
            .attr( "width", function(d, i) { return getBarWidth()})
            .attr( "height", function(d, i) { return getBarHeight(d); })
            .attr( "fill",function(d, i) { return colorPicker(getBarHeight(d)); })
        ;

        svg.selectAll("text")
            .data(dataItems)
            .enter()
            .append("text")
            .text( function(d) { return d;})
            .attr( "x", function(d, i) { return getTextXPosition(i) })
            .attr( "y", function(d, i) { return getTextYPosition(d) })
        ;
    }

    function initData(data) {
        dataItems = data;
        setGraph();
    }

    function resize() {
        svg.selectAll("rect").remove();
        svg.selectAll("text").remove();
        svgElement.attr("width", setWidth()).attr("height", setHeight());
        setGraph();
    }

    return {
        resize : resize,
        createBarGraph : createBarGraph
    }

})(window, d3);

I encapsulated all the behavior into an object I called BarChart and placed it in a separate .js file for reuse.  The full listing is above.  There are only two public methods - resize and createBarGraph.  And honestly only createBarGraph is necessary, as the resize event handler is registered within the object during initialization.  The createBarGraph method takes three parameters.  The first is elementName, the Id attribute of the element which will contain the <svg> element.  The second is svgName, which will be used as the Id attribute when the <svg> element is created.  And finally urlAction, which is where the data for the graph can be obtained.  I'll admit the parameter names are a bit wonky - but since you have access to the source code you are welcome to change them.
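
For completeness, here is a hypothetical usage - the container id, svg id, and data URL below are placeholders:

    <!-- the element that will contain the responsive chart -->
    <div id="chart-container" style="width: 100%; height: 300px;"></div>

    <script>
        // Build the chart inside #chart-container; the JSON endpoint simply
        // returns an array of numbers, e.g. [10, 45, 72, 30].
        BarChart.createBarGraph("chart-container", "chart-svg", "/data/bandwidth.json");
    </script>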

Thursday, September 3, 2015

Entity Framework - Specify Different Connection String Programmatically

I was pretty sure that I wasn't the first person to run into this problem - needing to specify a different connectionString entry for use with my Entity Framework DbContext class.  Most of the time having the one entry just works, and all you need to worry about is adjusting the database location, username, password, and sometimes the catalog as your application migrates from development, to QA, and then (hopefully) into production.

But what happens if you need to use your EF DbContext class against two different data sources in the same application?

The reason I had to do this: I needed to quickly get the QA environment ready for testing an upgrade by copying the entire content of Production.  This environment refresh included the database as well as LDAP content (minus passwords), which needed to be in sync for the QA environment to function properly.

I was left speechless when I realized that my Entity Framework constructor didn't, initially, allow me to override the name of the entry in the connectionStrings section of the app.config file.  I was even more surprised at the numerous creative ways developers managed to work around this, as illustrated by searching StackOverflow.  After reading over several different approaches spanning the past few years I became overwhelmed and decided that, after lunch, I'd find another way to do what I wanted to do.  After all, I surmised, this process was going to be used by a highly technical and detail oriented staff, and we could easily run a couple of different processes to get the QA environment refreshed.

But after lunch I had another thought and opened up the model's context class and looked at the constructor.
public SampleEntities()
    : base("name=SampleEntities")
{
}

My first thought was, it really couldn't be that easy.  Could it?  "SampleEntities" was the name of the connection string in my app.config file.  On a hunch I added another constructor, this time allowing a different name to be specified.
public SampleEntities(string connectionName)
    : base("name=" + connectionName)
{
}

Once that hard work was completed I went over and tested it out.
using (SampleEntities entities = new SampleEntities("DestinationDatabase"))
{
    var customers = entities.Customers.FirstOrDefault();
    Console.WriteLine(customers.EmailAddress);
}

It worked.  The key here is that the name provided in the constructor must match the name of an entry in the connectionStrings section of your app/web.config file.  I was speechless, amused, and extremely happy this worked.
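
For illustration, the matching connectionStrings section might look something like this - the metadata resource names, server names, and catalog are placeholders for your own model and environments:

<connectionStrings>
  <add name="SampleEntities"
       connectionString="metadata=res://*/Sample.csdl|res://*/Sample.ssdl|res://*/Sample.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=DEVSQL;initial catalog=Sample;integrated security=True&quot;"
       providerName="System.Data.EntityClient" />
  <add name="DestinationDatabase"
       connectionString="metadata=res://*/Sample.csdl|res://*/Sample.ssdl|res://*/Sample.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=QASQL;initial catalog=Sample;integrated security=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>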

Wednesday, July 15, 2015

JQuery Validation .valid() Lies....

I had the opportunity to develop an account management page that replaced an old ASP.NET/WebForms application. It was decided to make this application considerably more user friendly and more responsive - effectively we wanted to give it a good technology update.

The web designer did a great job putting together the new UI/UX - lots of modal pop-ups that allowed customers to change their settings and partial page refreshes to display those changes as they took place.

One key aspect that needed to be retained was the validation rules that prevented duplicate user names as well as duplicate email addresses from being saved in the database. These validations were to be handled consistently across the application so they were added as remote validations within the model class as illustrated below.
[Required(ErrorMessage = "Email Address is required.")]
[StringLength(100, ErrorMessage = "Email Address must be between 5 and 100 characters in length.",MinimumLength = 5)]
[RegularExpression( @"<trimmed for brevity", ErrorMessage = "Please enter a valid Email Address.")]
[Display(Name = "Email Address")]
[Remote("CheckEmail", "Home", AdditionalFields = "UserName", ErrorMessage = "This email has already been registered.  Please enter a different Email Address")]
[DataType(DataType.EmailAddress)]
public string EmailAddress { get; set; }

When using the [Remote] attribute JQuery Validation will, during the validation process, fire off an $.ajax() call to the controller/method you indicate. The method should be defined similar to that below:
public JsonResult CheckEmail(string emailAddress, string userName)
{
   try
      {
         if (this.repository.GetAccountEmail(userName).ToLower().Equals(emailAddress.ToLower()))
         {
            // this is fine, use it
            return this.Json(true, JsonRequestBehavior.AllowGet);
         }

         if (this.repository.IsEmailRegistered(emailAddress))
         {
            return this.Json(false, JsonRequestBehavior.AllowGet);
         }
      }
      catch (Exception exception)
      {
         this.logging.Error("Home/IsEmailAddressRegistered", exception);
         // don't throw...otherwise we'll send over default error page back to the .ajax call.
      }
   return this.Json(true, JsonRequestBehavior.AllowGet);
}

Basically a question is being asked - and a TRUE/FALSE response is required for the validation to fire correctly.  In this instance you'll notice that I attached the UserName in the call - this was because the email belonging to the user being edited was of course valid and exempt from the duplicate email rule.

In most cases the validation from this controller method will return an answer well before the user attempts to submit any changes back to the server (email is the second field on the add form, and the first on the edit form).  However, there remains a slight problem with the jQuery $.validator object.  When it invokes this remote validation on the server it won't wait for the response before returning a result when $(form).valid() is called.  It will provide a dirty answer - in other words, it will lie to you.  And end users will find these holes by simply entering an email and then clicking the button that fires off the save.

According to the project members of the $.validate project on GitHub this isn't a bug, but a feature.  While I agree with the logic presented in the ticket, I couldn't find a solution within the validation documentation that would submit the data on the form to the server only after all the validations have returned.  In fact, the jQuery validation documentation says the submitHandler event handler should be the place where you do an $.ajax() form post after the form has been validated.  The problem is that submitHandler is invoked even when there are still pending requests in the $.validator object.  It seems no matter what you do, .valid() will return invalid responses until all the pending requests have completed.
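
For reference, the documented pattern looks roughly like this - the catch being that submitHandler can fire while the remote validation request is still in flight:

// A sketch of the documented submitHandler approach (form id taken from this example).
$('#addAccountForm').validate({
    submitHandler: function (form) {
        // post the form via $.ajax() here - but this can run before the
        // remote CheckEmail validation has returned its answer.
    }
});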

To combat this I found some code that will keep checking until two conditions are satisfied.

  • $.validator.pendingRequest counter must be zero 
  • .valid() method returns true.

The method below gets a handle to the form's $.validator() object.  It then examines the pendingRequest value; if it is not zero, the function simply exits.  If it is zero, all validation $.ajax() requests have returned and it is now safe to check the valid() method.  If that returns true we can safely post the data to the server.
function waitForAddFormValidation() {
   var validator = $('#addAccountForm').validate();

   if (validator.pendingRequest === 0) {
      clearInterval(interval);
      if ($('#addAccountForm').valid()) {
         // push your data or form submit
      }
   }
 }
You'll notice above that I tell the browser to stop calling waitForAddFormValidation, once pendingRequest reaches zero, by calling the clearInterval method.

Then, during the button click event, set up the method above to be fired at regular intervals.  This is illustrated below.
// Fired from the "save" button on the add account modal
$("#addAcctAddBtn").click(function(event) {
    if (!$('#addAccountForm').data('changed')) {
        $('#addAccountModal').modal('hide');
        return;
    }

    $('#addAccountForm').valid();

    interval = setInterval(waitForAddFormValidation, 30);
});

In this instance I am first telling the form to validate - which will fire all the validations, including our troublesome $.ajax() call to Home/CheckEmail. Then I instruct the browser to invoke "waitForAddFormValidation" every 30 milliseconds until I tell it to stop.

In going this route I felt like I was abusing the $.validator object a bit - but frankly this worked well, and I couldn't find a better way around the problem.

Tuesday, April 21, 2015

Microsoft MVC - Fun with Views

Recently we developed, as part of an overall re-skinning/re-factoring project, a menu service. The purpose of this menu service was to allow all our MVC applications to obtain a list of links to the other applications the current user had access to. For instance - if the user had a video product we'd provide the link to the DVR manager application. The idea behind this service is that it would be called asynchronously during the application load - after the user had supplied their credentials - and would then dynamically adjust the hamburger menu at the top of the page. Once the service was developed and unit tested I had the opportunity to wire it up to a template project to see how it would be implemented across all our MVC applications.

The Menu Service

The call to the menu service was rather simple - a GET to a URL following this format: https://services.domain.com/MenuService/api/Menu/ with the user name appended to the end. Upon success the menu service would return the simple model illustrated below.
    public class MenuItem
    {
        #region Public Properties

        public List Groups { get; set; }

        public string IconImageText { get; set; }

        public string LinkUrl { get; set; }

        public string Name { get; set; }

        #endregion
    }
So as not to get into too much detail here - basically I'd get a link, an image name to display on the UI, and a friendly name to display within an A element embedded in an LI element. Upon receipt of the data from the service, the following javascript would be invoked to build the hamburger menu.
     
$.each(data, function (key, value) {
    // The li/a markup here was stripped by the blog's HTML rendering;
    // this is an approximate reconstruction of the original append.
    $("#inlinenavigation").append(
        "<li><a href='" + value.linkUrl + "'>" + value.name + "</a></li>");
});
    Really nothing dramatic or even that exciting here.

    User? What User?

    So you may have noticed the URL pattern above required that it be supplied with a user name - preferably the user name of the person who logged into the site. The question then was how best to accomplish this? Mind you, this call was going to happen on the client, in their web browser.

    The first idea was to simply add the user name to the model being passed to the view by the controller. This had some very time consuming implications. It would require that EVERY controller method return a model (we have a few that don't) and that each model come with a "built in" UserName property. Additionally, this new property would have to be populated EVERY TIME and added to each view as a hidden field. Finally, every application would have to be retested to make sure the UserName property was populated and that the menu service was called properly. This was removed from consideration because of the considerable weight of the code changes and testing that would need to take place.

    The second idea was to create a variable in the ViewBag. This seemed easy enough: within each controller's constructor (or the constructor of its parent class), fetch the user name. But wait - the constructor doesn't allow the [Authorize] attribute. So maybe move it into the methods that return a view? Sure, that might work. However, there are a few problems with this approach. First - this would need to be added to EVERY method (except the constructor) in the controller. Second - our development follows a specific pattern where the core of the application is developed first (behaviors, models, views, and simply getting it to work). Final design tweaks are made by the web designer to ensure compliance with our visual design standards. Then, after the code is reviewed by peers, the security layer (Windows Identity Foundation/ADFS) is added into the solution. So you won't be getting the identity claims until you are nearly done - meaning there'd be a lot of code written in each controller (or its parent) to handle the fact that there are no claims yet. This was also removed as an option because every controller object in every application would have to be touched to make this change.

    The final idea was to leverage the Razor engine a bit more than we normally do. It occurred to us that the change could be made in one place across all the applications, and the beauty of this approach is that this one place was going to be changed as part of the re-skinning effort anyway. The place to make the change was the _Layout.cshtml file. Before the @RenderBody() call, this code was placed in the layout file:
         
            @if (User.Identity.IsAuthenticated)
            {
                // Get the user name from the claims and set it as a hidden input on the page!
                var claimsId = (ClaimsIdentity)User.Identity;
                // The input markup below was stripped by the blog's HTML rendering;
                // this is an approximate reconstruction of it.
                <input type="hidden" id="claims-user-name" value="@claimsId.Name" />
            }
            else
            {
                // A placeholder for the default/dummy user name mentioned below.
                <input type="hidden" id="claims-user-name" value="default-user" />
            }
    
    Basically, obtain the user name from the custom claims populated after the user has been authenticated. If authentication hasn't occurred, send over a default or dummy user account (the default user name was still to be determined as of this writing). Once you have the user name the rest is rather easy - call the menu service on document ready, get the links, and add them to the hamburger menu.
         
            // Menu retrieve
            var userName = $("#claims-user-name").val();

            $.ajax({
                url: '@WebConfigurationManager.AppSettings.Get("MenuServiceUrl")' + userName,
                type: "GET",
                //crossDomain: true,
                //data: formData,
                success: function (data) {
                    $("#loading").remove();
                    $.each(data, function (key, value) {
                        // The li/a markup here was stripped by the blog's HTML rendering;
                        // this is an approximate reconstruction of the original append.
                        $("#inlinenavigation").append(
                            "<li><a href='" + value.linkUrl + "'>" + value.name + "</a></li>");
                    });
                },
                error: function (errorThrown) {
                    // there was an error with the call
                    $("#inlinenavigation").text("ERROR");
                }
            });
    You'll also note that the Url of the menu service is pulled from the configuration file - this allows a different menu service to be invoked in each environment (Dev, QA, Production).
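
    For illustration, the corresponding appSettings entry might look like this - the key name comes from the code above, and the value follows the URL pattern shown earlier (it would differ per environment):

    <appSettings>
        <add key="MenuServiceUrl" value="https://services.domain.com/MenuService/api/Menu/" />
    </appSettings>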

    Monday, October 27, 2014

    JavaScript and Working With the Google Maps JavaScript API

    The Introduction
    I am the first to admit that JavaScript is one of my least favorite languages to work with.  It could have been the inability to really debug the code - unless you count a bunch of "alert('i am here')" calls sprinkled within it.  Or it was my OCD, and my experience in software development to that point, which abhorred loosely typed data - JavaScript felt a bit too loose for my tastes.  It could also have been the poor documentation.  Either way, I've arrived at a reasonable working relationship with the language.  I've been given (and found) better tools to debug the script, I've learned to accept the loosely typed nature of the language (and even that has gotten better), and the documentation has gotten significantly better.

    So recently I had a real opportunity to stretch my JavaScript skills beyond simple $.ajax() calls and form validation.  My current employer is putting together a web site where our business customers can examine the status of their services in real-time, put in trouble tickets, and follow up on the status of trouble tickets submitted either by them or on their behalf.  One of the visual aspects of this request is the ability to map where those services exist.  What I mean by services is advanced products called circuits, used to carry large amounts of voice or data - think 10MB to 1GB pipes.  The industry term used for the location of these circuits is "A to Z addresses".  You can likely guess what that means - each circuit has a start location and an end location.  If the circuit is simply a connection between the network provider and the customer's location, the circuit will only have an "A" location; the "Z" location is inferred to be the local office of the network provider.  On the other hand, if the circuit runs between two different offices it will have both an "A" location and a "Z" location.  Each service the customer has installed can be comprised of one to hundreds of circuits.

    The Solution
    The data stored about these circuits is used a lot, its quality is extremely high, and it lives in a network inventory system (NIS).  In our business it is required to know where a circuit is installed, what equipment is located at the end points, and even whether portions of the circuit are being leased from another provider.  So getting the services and the A to Z locations for each circuit is relatively straightforward.  The disappointing part is that the NIS doesn't store the latitude or longitude of the circuit addresses - so those have to be looked up.   I decided early on that the Google Maps JavaScript API would be used to display the A to Z points on a map.  Also, if the circuit had both an A and a Z location, a line would be drawn between the two points indicating that those two markers were connected.

    Because the customer could have several services it was decided early that a map should appear for each service instance and not include circuits associated with a different service instance.

    So first I'll introduce the base classes that were used for the initial prototype.

         
    function MarkerAddress() {
        this.address = null;
        this.description = null;
        this.marker = null;
        this.drawLine = false;
        this.geoCodeResult = null;
        this.drawnLine = null;
    }
    
    function GoogleMapContainer() {
        this.companyMapInstance = null;
        this.serviceObjectId = null;
        this.googleListener = null;
        this.mapElement = null;
    }
    

    MarkerAddress will include the given address, a description that should appear on the Google Map marker, the marker object that appears on the map, an indicator that a line should be drawn to the prior marker in the array that will be stored, and the result from calling the Maps API's geocoding service.

    GoogleMapContainer will contain the element in which the map will appear, the service instance Id from the NIS, the Google listener handle, and an instance of the object that will be doing most of the work of looking up (and storing) the addresses for the circuits on that service instance.

         
    // REQUIRES THE underscore.js library to be loaded!
    function CompanyMapInstance() {
        this.googleGeoCodeInstance = new google.maps.Geocoder();
        this.googleMapInstance = null;
    
        // these contain the addresses we passed 
        // along with the extended properties
        // in the geoCode location in GoogleMaps!
        this.addresses = new Array;
    
        _.bindAll(this, "callBackGeoCode");
    }
    

    Finally there is the CompanyMapInstance object. Here is where an instance of the Google Geolocation object and Google Map object is stored. Along with an array of MarkerAddresses located in the addresses array.  You might notice the call to _.bindAll(this, "callBackGeoCode").  I'll talk more about this later.

    I won't go much into the details behind creating the instances of the GoogleMapContainer - I'll just say that a new instance will be created for each service instance the customer has on their account.  There's an array that contains these so an already created GoogleMapContainer can be found later, since the customer can display/hide each of the service instances on the main page.

    When a new service instance is requested for display an $.ajax() call is made back to the server to obtain all the circuit addresses.  The addresses are hydrated as objects and are placed into the CompanyMapInstance.addresses array.  Here's the initial version of the success callback that is invoked within the $.ajax() call.

         
    success: function (circuitPoints) {
        var circuitPointList = JSON.parse(circuitPoints);
    
        if (circuitPointList.length > 0) {
            var containerId = circuitPointList[0].ServiceObjectId;
    
            // find the map container for this service instance
            var mapContainer = $.grep(googleApis, function (e) { return e.serviceObjectId == containerId; });
    
            if (mapContainer.length > 0) {
    
                var aCompanyMapInstance = mapContainer[0].companyMapInstance;
    
                for (var i in circuitPointList) {
                    // normalize the data a bit - TODO - this could be better?
                    var aMarkerAddress = new MarkerAddress();
                    aMarkerAddress.address = circuitPointList[i].ALocationAddress;
                    aMarkerAddress.description = circuitPointList[i].Description;
                    aMarkerAddress.drawLine = false;
                    aCompanyMapInstance.addresses.push(aMarkerAddress);
    
                    if (circuitPointList[i].ZLocationAddress != null) {
                        aMarkerAddress = new MarkerAddress();
                        aMarkerAddress.address = circuitPointList[i].ZLocationAddress;
                        aMarkerAddress.description = circuitPointList[i].Description;
                        aMarkerAddress.drawLine = true;
                        aCompanyMapInstance.addresses.push(aMarkerAddress);
                    }
                }
                
                // i've populated the addresses!!!!
                // now mark the points...and here's why the _.bindAll() is important!!
                aCompanyMapInstance.setMarkers();
            }
        }
    }
    

    Once the addresses are populated the CompanyMapInstance method of setMarkers is invoked.  This is displayed below.

         
    CompanyMapInstance.prototype.setMarkers = function () {
        for (var i in this.addresses) {
            var address = this.addresses[i].address;
            this.googleGeoCodeInstance.geocode({ 'address': address }, this.callBackGeoCode);
        }
    };
    

    So for each address the Google geocode method is invoked to find the lat/long, and the CompanyMapInstance method callBackGeoCode is registered as the callback to invoke when the address is found.  So now you might have guessed why the _.bindAll is necessary: it allows the callBackGeoCode method to access the addresses array stored on the CompanyMapInstance that invoked the geocode method.  Once the correct MarkerAddress has been found, this lets us pull the description (which is then set on the marker object), assign the marker object to the MarkerAddress instance, and store off the result of the geocode call.  The callBackGeoCode method is defined below.

         
    CompanyMapInstance.prototype.callBackGeoCode = function (results, status) {
    
        var captionName = "A circuit point";
    
        if (status == google.maps.GeocoderStatus.OK) {
            if (status != google.maps.GeocoderStatus.ZERO_RESULTS) {
    
                // pull the results...
                var latLong = results[0].geometry.location;
                var geoCoderObjectResult = results[0];
    
                // center the map on the last circuit.
                this.googleMapInstance.setCenter(latLong);
    
                // place the marker on the map.
                var marker = new google.maps.Marker({
                    position: latLong,
                    map: this.googleMapInstance,
                    title: captionName
                });
    
                // now find this address in the array of this.addresses
                var item = this.findAddress(geoCoderObjectResult);
    
                if (item>=0) {
                    // found the address item...
                    // save off the marker!
                    this.addresses[item].marker = marker;
                    // save off the geoCoderObjectResult!
                    this.addresses[item].geoCodeResult = geoCoderObjectResult;
    
                    marker.setTitle(this.addresses[item].description);
    
                    if (this.addresses[item].drawLine) {
                        if (item > 0) { // make sure you aren't the first item in the list!
                            var priorAddress = this.addresses[item - 1];
    
                            if (priorAddress.geoCodeResult) {
                                // only try and draw that line IF you have a geoCodeResult!
                                var pathItem = [latLong, priorAddress.geoCodeResult.geometry.location];
    
                                if (priorAddress.geoCodeResult) {
                                    this.addresses[item].drawnLine = new google.maps.Polyline({
                                        path: pathItem,
                                        geodesic: false,
                                        strokeColor: '#FF0000',
                                        strokeOpacity: 1.0,
                                        strokeWeight: 2,
                                        map: this.googleMapInstance
                                    });
                                }
                            }
                        }
                    }
                }
            }
        }
    };
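
    As an aside, here is roughly what the _.bindAll call in the constructor bought us - a sketch in plain JavaScript, not the project code:

    // Roughly equivalent to _.bindAll(this, "callBackGeoCode"): replace the method
    // with a version permanently bound to this instance, so Google's geocoder can
    // invoke it as a plain function and `this.addresses` still resolves correctly.
    this.callBackGeoCode = this.callBackGeoCode.bind(this);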
    

    Mind you, there's still plenty of code and testing that needs to take place - however, the initial results were quite exciting. And they provided a great opportunity to flex my JavaScript skills.

    Thursday, October 23, 2014

    Upgrading to AD FS 3.0

    I always felt the implementation of AD FS 2.0 was clunky in Windows 2008 and, from all appearances, was a bolt-on.  With the release of .NET 4.5, WIF and AD FS support was built into the framework, and as a bonus AD FS was better baked into the Windows 2012 operating system.  Microsoft also made some significant changes to the technology that were impressive and potentially worth the upgrade.

    I was finally given an opportunity to upgrade to Windows 2012 R2, and with it a chance to upgrade our STS to AD FS 3.0.  There are a number of positive changes that you can read about on other blogs and on Microsoft's web site(s).  Here are a few observations I noted during the process of getting AD FS 3.0 working in our testing environment.

    First, an AD FS 3.0 (Windows 2012 R2) proxy will not work with an AD FS 2.0 (Windows 2008 R2) service.  It was clear during setup that the new proxy server was able to communicate with the old AD FS 2.0 service; however, it wasn't able to save the new proxy settings, and I would always get the error "Unable to save profile."  I am sure that if I had used the PowerShell commands I could have gotten a better error message.  This experiment was enough to request another Windows 2012 R2 server where I could install and configure an AD FS 3.0 service.

    Second, before setting up an AD FS 3.0 service (Windows 2012 R2) against a Windows 2008 R2 Active Directory server you have to upgrade the AD data store.  This is documented on Microsoft's TechNet site here.  The good news is that any existing AD FS 2.0 proxies/services will not be affected by this upgrade - you can continue to use them without any issue.  Additionally, all the AD management software on your Windows 2008 R2 server will continue to work as expected - likely a given to most, but it was something that needed to be tested prior to a production roll out.

    Third, AD FS 3.0 and AD FS 2.0 proxy/service servers can co-exist without any conflict.  However, you can't load balance them or expect them to behave in a cohesive fashion; you must treat them as two different endpoints for RPs to send login requests to.  This is helpful if you want to roll out the new proxy/service servers without affecting any existing RPs.  We took advantage of it by slowly migrating existing RPs to the new servers, while any new RPs automatically used the new proxy/service servers.

    Fourth, the wizard to set up the AD FS 3.0 service server didn't work for me, and it isn't clear to me how it would for anyone, based upon the PowerShell script it creates along the way.  I hit a couple of stumbling blocks.  I first needed a certificate that matched the domain in which I was installing the AD FS service.  While I understand a certificate issued for the same domain is the normal scenario, our AD FS 2.0 instance in the test environment was set up with a certificate issued for a different domain, so I created and installed a temporary certificate to get past the first set of error(s).  Next, the Install-AdfsFarm cmdlet in PowerShell requires the name and credentials of a service account that will be used for running the setup process as well as for access to any MS SQL instance you will be using.  Those credentials weren't in the PowerShell script, nor was there a prompt to ask for them - when the wizard executed you'd never get prompted for the user/password (this despite the need to provide one during the wizard setup process!).  The setup wizard doesn't give you very good (any) error messages that would help you complete the setup if there is an issue.  A great deal of time would have been saved by using the PowerShell environment to begin with: the certificate problem and the missing credentials were all errors being suppressed by the wizard - it only reported "An error has occurred" with each unsuccessful attempt.  Take my advice and skip the wizard for the creation of the first node in the AD FS farm and use the PowerShell command.
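
    For reference, creating the first farm node from PowerShell looks roughly like this - the certificate thumbprint, service name, and SQL server below are placeholders to adjust for your environment:

    # A sketch of creating the first AD FS 3.0 farm node (values are placeholders)
    $cred = Get-Credential -Message "AD FS service account"

    Install-AdfsFarm `
        -CertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567" `
        -FederationServiceName "sts.example.com" `
        -ServiceAccountCredential $cred `
        -SQLConnectionString "Data Source=SQLSERVER01;Integrated Security=True"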

    Fifth, when installing the AD FS 3.0 service do not upgrade your AD FS 2.0 MSSQL database.  Doing so will effectively leave your AD FS 2.0 installation in a non-functional state.  Use a different MSSQL server (or instance), or, if in a test environment, use the internal MSSQL instance running on the server where you are installing AD FS 3.0.  There is adequate warning for this - well, at least there is if you are paying attention.  The installation process will tell you that it found an existing AD FS data store and that it will be overwritten during the process.  "Overwritten" should be the giveaway that if you continue you won't be using your AD FS 2.0 proxy/service anymore.

    And finally, the new login screen in AD FS 3.0 prevents most customization.  It allows for some basic changes, which you can read about here.  However, overriding the behaviors within the onload.js file, adding Javascript libraries, or adjusting the login page's HTML from within the onload.js file is an "at your own risk" affair, and Microsoft will not provide any support.  This is of course expected - but I found it entertaining that on Microsoft's own site they show you how to override the default behavior so that someone only has to enter a user name on the login page.  I understand why this was done, but I also found it irritating, as it would take a bit of effort to provide the same functionality/customization that was present in our AD FS 2.0 login page.  And trying to make a page look good while adding new elements in Javascript is difficult for the best of web developers.