Saturday, December 7, 2013

log4net.Appender.EventLogAppender and WCF

So I finally cracked the magic behind getting the log4net EventLogAppender to work within a WCF service, and very likely any other web-related solution.  After pulling my hair out and spending hours poring over StackOverflow and log4net postings I had nearly given up hope of finding a way to get this working.  Most of the solutions offered either sounded scary - for instance, adding the identity of the application pool to the Administrators group on the local machine!?! - or simply didn't work, like the instructions in the documentation over at Apache, which leave out a crucial step if you are writing WCF or other web projects.  Hopefully if you come across this it might shed some light on the subject, prevent hours of frustration, and get you working.

The documentation over at Apache isn't complete when it comes to setting up log4net to use the Windows event viewer.  While the settings illustrated are precisely correct, they leave out two key ingredients - things which must be done to get this to work.

First, there is meaning behind the EventLogAppender property called "ApplicationName", which should be set within your web.config.  An example of the config file can be found over at Apache's config examples under the EventLogAppender settings.  Here is the property that I am specifically speaking about:

<applicationName value="YourAppName" />

This application name actually defines the value that will appear in the "Source" column in the event viewer.
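For context, that property lives inside the appender definition in your web.config.  A minimal sketch is below - the appender name and layout pattern are illustrative; see Apache's config examples for the full set of options:

```xml
<appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
  <!-- becomes the "Source" column in the event viewer -->
  <applicationName value="YourAppName" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>
```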

Second, in order to actually write out to the event viewer you need to do some registry work.  Navigate to the following area of your registry (or if you are wanting to write to another section of the event viewer navigate to that):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application

Within this section (assuming you want the event logs to appear under the Windows Logs/Application section) add a new key that is EXACTLY the same as the ApplicationName defined above.  Within this key add a new string value called "EventMessageFile" and set it to:


%SystemRoot%\Microsoft.NET\Framework\v4.0.30319\EventLogMessages.dll

Once this is done, recycle the application pool and you should start seeing Application event log entries with the "Source" equal to the ApplicationName set in your log4net <appender> group.

In order to avoid having to do all this typing on each of the target servers, I'd recommend that you export the key you added and place the resulting .reg file in your project for safe keeping.  This registry file can then be imported into the registry of the target server(s) to which you are deploying your application.  Adding it to the project, and eventually to version control, will keep your work around and handy for others.
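As a sketch, the exported file would look something like the following (assuming the ApplicationName is "YourAppName"; exporting from regedit will produce the exact encoding for you):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\YourAppName]
"EventMessageFile"="%SystemRoot%\\Microsoft.NET\\Framework\\v4.0.30319\\EventLogMessages.dll"
```

Double-clicking the file on the target server (or running `regedit /s yourappname.reg`) imports the key.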


Friday, November 8, 2013

$.ajax and javascript's FormData object

Wow...I am a slow study when it comes to JavaScript.  I guess there's nothing really hard about it - it's just that I have a real firm distaste for loosely typed objects and variables.  The fact that you can add a method to any variable and not have it complain until it is running is just plain frustrating.  I even swear that I've seen code work the first time, the second, and then for some inexplicable reason stop working, only to turn around and work again!  The thing that bugs me the most is how differently each of the browsers handles JavaScript.  jQuery (and others) attempts to insulate the developer from browser-specific behaviors and does it reasonably well from what I've seen thus far (except of course for IE).  I'm sure as I play with it more that I'll get used to working with JavaScript.

But today...wow...I was stumped with a jQuery ajax call to a Microsoft MVC controller method.  Hopefully this might help you if you run into problems trying to send form data that contains file input types.

First - you really, really don't need to use the append() method to add files to the FormData object.  Most of the searches that I did stated that in order for this to work you needed to use append() to attach the file to the form being posted to the web server.  I found that simply calling the constructor of the FormData object with the form element as a parameter pulls in all the necessary data.  The constructor not only includes data from the elements on the form, but also any input files the user has added, as illustrated below.

var formData = new FormData($('form')[0]);  

Yes, that's all that is needed.  Not being that close to the development/history of JavaScript, this could have been different in earlier versions - but as of this writing this is all I needed.

Second - the tricky part, and something that took me the better part of a morning to figure out.  The $.ajax() method can in fact send over the FormData object in a format that can be de-serialized by the MVC model binder into your model.  Here's the code:
$('form').on('submit', function (e) {  
   e.preventDefault();  
   
   $.ajax({  
     type: this.method,  
     // this works!...don't mess with it...  
     url: '@Url.Action("SaveOffer", "Offer")',  
     contentType: false,  
     processData: false,  
     data: new FormData($('form')[0]),  
     success: function (data) {  
       // handle the JSON result returned by the controller  
     }  
   });  
 });  
The key to making this work is the contentType and processData properties in the $.ajax() call.  Set those two properties to false, and point the url property at the method (below) in your controller that will accept and process the form data.

 [HttpPost]  
 public JsonResult SaveOffer(OfferModel offerModel)  
 {  
   try  
   {  
     if (ModelState.IsValid)  
     {  
       GetFiles(ref offerModel);  
       _rewardsAdminRepository.UpdateOffer(offerModel);  
   
       return Json(new { result = "ok" },  
         JsonRequestBehavior.AllowGet);  
     }  
     return Json(new { result = "error" },  
       JsonRequestBehavior.AllowGet);  
   }  
   catch (Exception exception)  
   {  
     _log4Net.Error("SaveOffer(POST)", exception);  
     return Json(new { result = "error" },  
       JsonRequestBehavior.AllowGet);  
   }  
 }  
When the ajax method invokes the controller's SaveOffer method, the MVC model binder is able to take the content within the FormData object and fully hydrate the OfferModel.  Once I figured out those two properties were the key, I was successfully saving data entered on the form as well as uploading any files attached to the post.

Monday, October 14, 2013

MVC 4 Images

During a recent development cycle I needed to have the ability to display an image on the MVC View which was dynamically pulled from a data source (e.g. not stored in the Content folder).  After some looking around the consensus seemed to be that a Url.Action invoking a method in a controller was the best way to approach the problem.  I didn't at first have concerns about the solution - it was a trivial amount of code and the round trip from the data source and the page didn't seem to make rendering the page any less responsive.
 <div class="editor-field">  
  <input type="file" class="find_file_button" name="offerImage" />  
   
  <img id="offerImage" class="imagePreview" src="@Url.Action("GetImage", "Offer", new { id = Model.ImageId, imageType = "Image" })" />  
 </div>  
What you see above is the cshtml file invoking the "GetImage" method located within the OfferController class.  Some parameters are passed - in this case the type of image (for this solution two images were stored for each entity) as well as a hint on how to find the right image.  The "GetImage" method is pretty much what you'd expect - as I covered this in a prior entry I won't get deep into what is going on.
public ActionResult GetImage(int id, string imageType)  
 {  
   DisplayImageModel displayImageModel  
      = _rewardsRepository.GetDisplayImageModel(id, imageType);  
   
   if ( displayImageModel != null)  
   {  
    return File(displayImageModel.ImageBytes, displayImageModel.ImageMimeType);  
   }  
   return HttpNotFound();  
 }  
What really bugged me about this solution was primarily:
  1. When the cshtml page was being processed by the IIS server it had to stop and make another method call to obtain the image data associated with the model being presented.  This step also invoked another call to the data store to obtain the image.
  2. The model, when it was being populated from the data store, already had the capability to load up the image(s) in a property of type byte[].  Doing this could prevent another call and another dip into the data store.
So really how could I take the already loaded up data in the model and have it display the images on the page?  Well I ran into a solution quite by accident - why not load up the image as part of the page?

In order to prevent these irritations a small change to the model is necessary - adding the appropriate properties to store the image data in a byte[] avoids the need to pull it from the repository as a separate call and makes it readily available in the model that is passed to the view.  For added compatibility - but certainly not required - the MIME type of the image was also retrieved and placed in the model.
So far pretty easy.  The code in the repository that loaded up the model was altered to pull the image(s) from the data source and convert them to an array of bytes.
 public byte[] ThumbnailImage { get; set; }  
 public string ThumbnailImageType { get; set; }  
   
 public byte[] OfferImage { get; set; }  
 public string OfferImageType { get; set; }  
   
Once the image(s) are loaded in the model it is now necessary to encode and embed those images in the resulting HTML that is downloaded by the browser.
<div class="customeroffer_thumbnail">  
 @{  
   string imageSrc = null;  
     
   if (Model.ThumbnailImage != null)  
   {  
    string thumbBase64 =   
      Convert.ToBase64String(Model.ThumbnailImage);  
   
    imageSrc = string.Format("data:{0};base64,{1}", Model.ThumbnailImageType, thumbBase64);  
   }  
 }  
   
 <img id="thumbnailImage" class="imagePreview" src="@imageSrc" />  
 </div>  

There is really only one step - encode the property containing the image into a Base64-encoded string.  Then place that encoded string, along with the MIME type, into the <img> tag as illustrated above.  In fact this could be done in the model ahead of time by the repository if you prefer.  Doing that would simplify this code even further and avoid having any in-line code in your cshtml file (after proof-reading this - it really looks like some old-school ASP code).

The potential downside to this solution is that the resulting HTML file is considerably larger and could take longer to render in the browser.  This wasn't my experience - the same amount of data has to be downloaded either way; it's either in the HTML file or it's another request the browser makes to the web server to pull the image from the file system.
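The same encoding can be sketched outside of Razor.  Here is a small node.js illustration of building the data: URI that ends up in the src attribute - the bytes are hypothetical (just the PNG magic number), standing in for the image data the repository would load:

```javascript
// Build a data: URI from raw image bytes, mirroring what the Razor code
// above does with Convert.ToBase64String and string.Format.
const imageBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // PNG magic number, for illustration
const mimeType = "image/png";

const dataUri = "data:" + mimeType + ";base64," + imageBytes.toString("base64");
console.log(dataUri); // data:image/png;base64,iVBORw==
```

The browser decodes the Base64 payload directly from the src attribute, so no second request is made to the server for the image.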

Sunday, September 29, 2013

Telerik Kendo Grid - What's the drama behind putting a link in a grid's cell

After a few hours I was about at the end of my rope playing with Telerik's Kendo Grid control.  Seriously...how hard could it be to insert a link into a grid that will generate an action, e.g. one of the CRUD operations?  Here's how I finally managed this trick.

The MVC 4 standard approach is rather easy and straightforward.  Really all that is necessary is to post a bunch of Html.ActionLink calls as illustrated in the code snippet below.

 @foreach(OfferModel offerModel in ViewData.Model.Offers)  
 {  
   <tr>  
    <td>@Html.DisplayFor(m=> offerModel.Id)</td>  
    <td>@Html.DisplayFor(m=> offerModel.Title)</td>  
    <td>@Html.DisplayFor(m=> offerModel.Description)</td>  
    <td>@Html.DisplayFor(m=> offerModel.Status)</td>  
    <td>  
      @Html.ActionLink("Edit", "Edit", "Offer", new { id = offerModel.Id }, new { @class = "edit_button" })  
      @Html.ActionLink("Delete", "Delete", "Offer", new { id = offerModel.Id }, new { @class = "delete_button" })  
      @Html.ActionLink("Copy", "Copy", "Offer", new { id = offerModel.Id }, new { @class = "copy_button" })  
    </td>  
   </tr>  
 }  

There's a bit more going on here than simply having the action links place a URL in my table.  I added a class so that these links appear like buttons (standard stuff from jQuery, nothing exciting).

When the "buttons" are clicked the appropriate controller method is invoked along with the Id, so that the right record is either edited, deleted, or copied.

Then there's the Kendo grid - which honestly I'm impressed with.  Not having to worry about cross-browser compatibility, plus built-in sorting, filtering, and a lot more, makes it worth the trouble of trying to figure this out.

There's a lot of postings around putting a link into a Kendo grid.  Some of the ideas presented were pretty decent - sadly they didn't work for me and likely left a few people scratching their heads.  The biggest problem I needed to solve was to be able to embed the "id" of the row in the Html.ActionLink so that the resulting URL would look something like: /Offers/Edit/<id>.

How I got this to work was to insert a client template as shown below.

@(Html.Kendo().Grid(Model)  
    .Name("OfferModelGrid")  
    .Columns(columns =>  
    {  
      columns.Bound(p => p.Id);  
      columns.Bound(p => p.Title);  
      columns.Bound(p => p.ShortDescription);  
      columns.Bound(p => p.Status);  
      columns.Bound(p => p.StartDate);  
      columns.Bound(p => p.EndDate);  
   
      columns.Bound(p => p.Id)  
        .Filterable(false)  
        .Title("Action")  
        .Template(@<text></text>)  
        .ClientTemplate(Html.ActionLink("Edit", "Edit", "Offer", new {id = "#=Id#"}, new {@class = "edit_button"}).ToHtmlString() +  
                Html.ActionLink("Delete", "Delete", "Offer", new {id = "#=Id#"}, new {@class = "delete_button"}).ToHtmlString() +  
                Html.ActionLink("Copy", "Copy", "Offer", new {id = "#=Id#"}, new {@class = "copy_button"}).ToHtmlString());  
    }  
    )  
    .Pageable()  
    .Sortable()  
    .Filterable()  
    .DataSource(datasource => datasource.Ajax().Read(read => read.Action("FetchOffers", "Offer")))  
    )  

A number of examples suggested putting an Html.ActionLink between the <text></text> elements - however this didn't seem to affect much, if anything, in the results.  The key piece really is the population of the data that needed to be part of the action's URL - namely the #=Id# you see in the fourth parameter.  What this is really doing is taking the Id property of the model bound to the row and placing it into the resulting action link.  The result is exactly what you need - a link which will invoke the indicated method in the indicated controller.  Add in a bit of class and you end up with something that looks remarkably like the original MVC screen.
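To make that substitution concrete, here is a plain-JavaScript sketch (a conceptual illustration, not Kendo's actual template engine) of what the grid does with the #=Id# placeholder when it renders a row:

```javascript
// Conceptual sketch: the client template is rendered per row, with each
// #=PropertyName# placeholder replaced by that row's data item property.
var template = '<a class="edit_button" href="/Offer/Edit/#=Id#">Edit</a>';
var row = { Id: 42 }; // hypothetical row data from the grid's data source

var rendered = template.replace(/#=\s*(\w+)\s*#/g, function (match, name) {
  return row[name];
});
console.log(rendered); // <a class="edit_button" href="/Offer/Edit/42">Edit</a>
```

This is why the Html.ActionLink has to be emitted via ClientTemplate as an HTML string: the #=Id# token must survive into the markup the grid renders on the client.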


Saturday, September 28, 2013

Interested in returning to software development?

I've been asking myself the same thing periodically.  I've even had a few interviews for a software engineer position where I've been asked "why do you want to return to software development?".  While I won't know the outcome of the most recent interviews - I can illustrate how I avoided ruining my chances at returning to the core of my profession.

Don't Downplay Your Management Skills
Be proud of your accomplishments while at the helm.  Be honest about the challenges, and outline both the things you'd like to have done and the things you could have done better as a manager.  Also point out what appealed to you about management in the first place.  Provide examples of how you showed leadership and what your thoughts are on leadership.  I, for instance, don't believe that management and leadership are the same thing.  Leaders aren't necessarily managers - however others respect, listen to, and follow leaders regardless of titles.  Managers can be leaders - if they are doing it right.  Generally managers are culled from the herd because they are the leaders within the development team.

Outline Your Management Philosophy
These things will help the hiring manager know what you are like as well as what you expect from them as a manager (that is, if they are a good manager).  Stating that you are not a micro-manager is bold, but it also makes it clear you don't expect or need to be micro-managed.  State how you view leadership and management - in positive terms.  For instance, I look at management as an opportunity to serve the people I work with.  I find their strengths and exploit them.  I find what makes them happy about their job and attempt to give them more of those types of opportunities.  I ignore their weaknesses rather than trying to turn them into strengths (yes, because their weaknesses are their problems, not yours or your organization's).  I praise in public and reward discreetly as needed for the person's personality.  I support them in their job.  And unless they exhibit behavior that needs to be handled by HR - I support them publicly and in private meetings with manager/director peers.  I chasten and coach in private and (mostly) without any anger.  I remember my place - I will never look good unless the people I serve are happy and productive.

Be Clear On What You Can Do and What Your Weaknesses Are
I am good at back end work.  I can put together an API.  I can also write code against, and provide a protective layer around, someone else's API.  During my stint as a software engineer my second manager would ridicule my mad skills at UI design - more than a few times, and mostly in public.  He's a good friend, and I'm sure he didn't have any malice regarding these comments.  Sadly those comments stuck with me - and I am pathetic at HTML and CSS because I've never bothered to learn how to make something look nice.  But remember when I stated above that I leverage other people's strengths?  So I've relied on a web designer to clean up my look and feel once the core functionality was in place.

Bottom line to all of this?  Don't lie, and accentuate what you can bring to an organization.


Friday, September 20, 2013

MVC 4 - multiple image/file uploads and viewing images

Having tinkered with MVC 4 in Visual Studio 2012, I hadn't really developed an application that solved enterprise requirements.  Finally a project at work came my way and provided an opportunity to use the MVC design pattern and its implementation in Visual Studio 2012.

The business requirements were rather easy.  The marketing team desired to offer rewards to our subscribers.  These rewards could be a coupon for a discounted yogurt at a local business or even a free USB thumb drive with the company logo prominently displayed.

Beyond storing the basics (e.g. titles, descriptions, start and end dates), the project required that two different types of images be stored and later displayed.  The first image was smaller and its primary purpose was to display a small graphic of the logo of the company that provided the offer.  A larger image could also be added, containing a coupon or other image which needed to be displayed when the offer was redeemed by the customer.

So the first two problems arose - I wanted to be able to upload and display both images on the same form.

The image display actually was rather easy.

<div class="editor-label">  
   @Html.LabelFor(m => m.ImageId)  
 </div>  
 <div class="editor-field">  
   <img id="offerImage" class="imagePreview" src="@Url.Action("GetImage", "Offer", new { id = Model.ImageId, imageType = "Image" })" />  
   <input type="file" name="offerImage" />  
 </div>  
   

The code to do this is in the cshtml example above.  By embedding a @Url.Action in the src attribute of an image element, the server will invoke the GetImage method within the OfferController class.  There is also an opportunity to pass specific parameters so that each image element will display the correct type of image associated with the offer.

What wasn't as intuitive was what type of ActionResult needed to be returned by the OfferController class.  After some hunting around and experimentation I ran into the File method, which takes as parameters the image (as a byte[]) and the MIME type, as illustrated below.

 public ActionResult GetImage(int id, string imageType)  
 {  
   DisplayImageModel displayImageModel   
    = _rewardsRepository.GetDisplayImageModel(id, imageType);  
     
   if ( displayImageModel!=null )  
   {  
    return File(displayImageModel.ImageBytes, displayImageModel.ImageMimeType);  
   }  
   return HttpNotFound();  
 }  

Once that was squared away I was able to successfully display images from my data source.


The real trouble came with the desire to upload more than one image on the same cshtml form.  There doesn't seem to be anything built into the MVC 4 implementation for this type of operation.  The first thought was to simply adjust expectations and create a link that would collect these images in a separate view.  This really didn't provide the polished experience desired by the marketing team, and it felt amateurish.

In order to upload any files during the HTTP POST, a small change needs to take place in the cshtml file's BeginForm declaration.  The key is to ensure that an enctype of "multipart/form-data" is set, as shown below.

 @using (Html.BeginForm("Edit", "Offer", null, FormMethod.Post, new { enctype = "multipart/form-data" }))  
 {  
   @Html.ValidationSummary(true);  

For those used to ASP.NET and HTML it should be easy to grasp why this is needed.  However, as I researched the problem further, a number of sites indicated that the method invoked by the form post needed an IEnumerable<HttpPostedFileBase> parameter.  As it turns out this wasn't correct at all - in my experience that parameter was always null, and it certainly couldn't be used to obtain more than one file from the post data.  Anyone making this statement clearly hasn't actually run their 'example' code.  In fact, as I experimented further, no such parameter is required at all, as illustrated below.

     [HttpPost]  
     public ActionResult Edit(OfferModel offerModel)  
     {  
       try  
       {  
         if (ModelState.IsValid)  
         {  
           GetFiles(ref offerModel);  
   
           _rewardsAdminRepository.UpdateOffer(offerModel);  
   
           return RedirectToAction("Index", "Offer");  
         }  
   
         SetSelectList(offerModel);  
         DecodeHtml(ref offerModel);  
   
         return View(offerModel);  
       }  
       catch (Exception exception)  
       {  
         _log4Net.Error("Edit(POST)", exception);  
         return ProcessError(exception.Message);  
       }  
     }  

Then it occurred to me that the Request object is in scope during any method with the [HttpPost] attribute, and that this Request object contains a property called Files that holds an entry for each of the file input elements on the html form.  Rather than clutter up the Edit method above - and because a Create method would also need the ability to obtain the files from the Request object - a private method called GetFiles was created.  Its implementation is listed below.

     private void GetFiles(ref OfferModel offerModel)  
     {  
       foreach (string fileName in Request.Files)  
       {  
         HttpPostedFileBase hpf = Request.Files[fileName];  
   
         if (hpf != null && hpf.ContentLength > 0)  
         {  
           if (fileName.Equals("thumbnailImage"))  
           {  
             DisplayImageModel imageModel = GetImageData(hpf);  
             offerModel.ThumbnailBytes = imageModel.ImageBytes;  
             offerModel.ThumbnailMimeType = imageModel.ImageMimeType;  
           }  
           else if (fileName.Equals("offerImage"))  
           {  
             DisplayImageModel imageModel = GetImageData(hpf);  
             offerModel.ImageBytes = imageModel.ImageBytes;  
             offerModel.ImageMimeType = imageModel.ImageMimeType;  
           }  
           else if (fileName.Equals("customerListFile"))  
           {  
             // we have a list of customer's, e.g. main bill numbers...  
             List<long> mainBillNumbers = GetCustomerList(hpf);  
   
             if (mainBillNumbers != null && mainBillNumbers.Count > 0)  
             {  
               foreach (long mainBillNumber in mainBillNumbers)  
               {  
                 offerModel.CustomerList.Add(mainBillNumber);  
               }  
             }  
           }  
         }  
       }  
     }  

As you can see, you can iterate through the Files collection and get a clear understanding of which element is sending a file in the post data.  As it happens, fileName is the name given to the input element in the cshtml file.  Look at the declaration of the input elements below and compare it with the code above.

     <div class="editor-group">  
       <div class="editor-label">  
         @Html.LabelFor(m => m.ThumbnailId)  
       </div>  
       <div class="editor-field">  
         <input type="file" id="upload_ThumbImage" class="find_file_button" name="thumbnailImage" />  
         <p>  
           <img id="thumbnailImage" class="imagePreview" alt="No Image Selected" src="@Url.Action("GetImage", "DisplayImage", new {id = Model.ThumbnailId, imageType = "Thumbnail"})"/>    
         </p>  
       </div>  
     </div>  
   
     <div class="editor-group">  
       <div class="editor-label">  
         @Html.LabelFor(m => m.ImageId)  
       </div>  
       <div class="editor-field">  
         <input type="file" id="upload_OfferImage" class="find_file_button" name="offerImage" />  
         <p>  
           <img id="offerImage" class="imagePreview" alt="No Image Selected" src="@Url.Action("GetImage", "DisplayImage", new {id = Model.ImageId, imageType = "Image"})"/>   
         </p>  
       </div>  
     </div>  

The code that actually pulls the image file data and saves it is implemented in the GetImageData method illustrated below.  This method also does some checking to ensure that only certain image types are used.
     private DisplayImageModel GetImageData(HttpPostedFileBase imageFile)  
     {  
       if (imageFile.ContentType.ToLower().Equals("image/jpeg") ||  
         imageFile.ContentType.ToLower().Equals("image/png"))  
       {  
         using (MemoryStream ms = new MemoryStream())  
         {  
           imageFile.InputStream.CopyTo(ms);  
           DisplayImageModel imageModel = new DisplayImageModel  
           {  
             ImageMimeType = imageFile.ContentType.ToLower(),  
             ImageBytes = ms.ToArray()  
           };  
   
           return imageModel;  
         }  
       }  
       return null;  
     }  

Friday, July 19, 2013

Mail Chimp 2.0 API Interface

As part of my position I periodically have the opportunity to suggest solutions that will help the marketing and communications team streamline their operations.  I had such an opportunity recently when I made a pitch to use a third party to manage our customer communications.  I looked at two solutions: Constant Contact and Mail Chimp.

Both of these solutions do essentially the same thing - they provide the ability to manage emails in different lists and give the recipients the ability to manage their subscriptions.  The customer base would initially be subscribed to four lists - System Outages, Product Updates, New Product Sales, and Event Notifications.  One of the features the marketing team wanted was the ability to target specific users within each mailing list.  For instance, having the ability to target a group of customers in the Product Updates list that currently have an earlier release of the product and are now eligible for a reduced-rate upgrade.  Sending this notification to the entire list of customers would have adverse results on the company's bottom line.  The ability to segment a mailing list into smaller and more specific groups spares users from having to maintain and manage a myriad of lists for specific customer segments.  Imagine a new mailing list having to be created for each segment of the customer population, and having to merge and maintain the "opt out" lists between them.  It also prevents the customer from having to navigate (potentially) a huge number of mailing lists and unsubscribe from them - all the while being added to more as they fall into newly created segments.

The intention of this entry isn't necessarily to point out the weaknesses or strengths of the Mail Chimp and Constant Contact products (although that comes up a bit).  Rather it is to partially illustrate the interface that was created for the product selected for a deeper analysis.  After some research and playing with both products I settled on Mail Chimp, for one primary reason - they have list segmentation built into their tool.

My first attempt at the API didn't give me much hope that the final product would be easy to maintain.  It also had the side effect of duplicating code within every method, as illustrated below in the Ping and Account Details methods located in the "helper" section of Mail Chimp's "REST-like" interface.

 public AccountDetail AccountDetails()  
 {  
   string jsonObject = this.PostRequest("/helper/account-details", this.JsonApiKey);  
     
   AccountDetail accountDetails = JsonConvert.DeserializeObject<AccountDetail>(jsonObject);  
   
   return accountDetails;  
 }  
   
 public string Ping()  
 {  
   string jsonObject = this.PostRequest("/helper/ping", this.JsonApiKey);  
   
   Ping ping = JsonConvert.DeserializeObject<Ping>(jsonObject);  
   
   return ping.Message;  
 }  
  

From this small example you can see that with each method I'd have to serialize the parameters for the PostRequest method.  I'd also have to produce several PostRequest overloads handling all the different parameter cases - or worse, modify the method to handle dynamic parameters.  Not to mention that with each method call I'd have to deserialize the Json return and handle exceptions.  So I made my first adjustment to the PostRequest method, which provided some relief from these dilemmas.

 protected T PostRequest<T>(string method, string payload)  
 {  
   Type type = typeof(T);  
   
   if ( !type.IsDefined(typeof(JsonObjectAttribute), false) )  
   {  
    throw new InvalidOperationException("Object type invalid");  
   }  
   
   string jsonObject = PostRequest(method, payload);  
   
   return JsonConvert.DeserializeObject<T>(jsonObject);  
 }  

So now I can simplify the Ping method and remove the need to deserialize the data being returned from the Mail Chimp methods.

 public string Ping()  
 {  
   Ping ping = PostRequest<Ping>("/helper/ping", this.JsonApiKey);  
   
   return ping.Message;  
 }  
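The consolidation that the generic PostRequest<T> buys can be sketched in plain JavaScript - a conceptual illustration only, not the Mail Chimp client itself; the transport function here is a hypothetical stand-in for the actual HTTP POST:

```javascript
// Conceptual sketch of the pattern: one generic helper that serializes the
// parameter object, posts it, and deserializes the reply in a single place,
// so individual API methods stay one line long.
function postRequest(method, params, transport) {
  var payload = JSON.stringify(params);       // serialize once, centrally
  var jsonReply = transport(method, payload); // the real client does an HTTP POST here
  return JSON.parse(jsonReply);               // deserialize once, centrally
}

// Hypothetical transport faking the /helper/ping endpoint:
function fakeTransport(method, payload) {
  return JSON.stringify({ msg: "pong" });
}

var ping = postRequest("/helper/ping", { apikey: "your-api-key" }, fakeTransport);
console.log(ping.msg); // pong
```

Each API wrapper then reduces to a single call, exactly as the simplified Ping method above does in C#.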

This doesn't completely solve my problem, as I discovered later, because I needed to send over more than just the API key in all but the simplest of methods.  This required another change to the PostRequest method.

 protected T PostRequest<T>(string method, Object jsonParam)  
 {  
   Type type = typeof(T);   
     
   if ( !type.IsDefined(typeof(JsonObjectAttribute), false) )   
   {   
    throw new InvalidOperationException("Object type invalid");   
   }  
   
   Type parameter = jsonParam.GetType();  
   
   if ( !parameter.IsDefined(typeof(JsonObjectAttribute), false) )   
   {   
    throw new InvalidOperationException("Parameter Object type invalid");   
   }  
   
   string jsonParameterString = JsonConvert.SerializeObject(jsonParam);  
   
   string jsonObject = PostRequest(method, jsonParameterString);  
   
   return JsonConvert.DeserializeObject<T>(jsonObject);  
 }  

This change allows users of the API interface to pass any object bearing a JsonObject attribute (I used the Json.NET assembly by James Newton-King) into the PostRequest method.  PostRequest will check that the object is valid, serialize it to a string, and pass it over the wire to Mail Chimp.
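To illustrate, here is a minimal sketch of a parameter class that would satisfy those checks.  The class and property names here are hypothetical - they are not the actual Mail Chimp 2.0 schema - but any object decorated with Json.NET's JsonObject attribute works the same way:

```csharp
using System;
using Newtonsoft.Json;

// Hypothetical parameter object; the "apikey"/"id" names are
// illustrative, not the actual Mail Chimp 2.0 request schema.
[JsonObject]
public class ListRequest
{
    [JsonProperty("apikey")]
    public string ApiKey { get; set; }

    [JsonProperty("id")]
    public string ListId { get; set; }
}

public class Demo
{
    public static void Main()
    {
        var request = new ListRequest { ApiKey = "key-us1", ListId = "abc123" };

        // This is the same serialization step PostRequest performs internally.
        Console.WriteLine(JsonConvert.SerializeObject(request));
        // {"apikey":"key-us1","id":"abc123"}
    }
}
```

A call then looks something like PostRequest&lt;SomeResult&gt;("/lists/list", request), with the attribute checks guarding against plain objects slipping through.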

Please note I realize PostRequest still needs a lot of work - and a lot of work was done to it and to the interface overall during the evaluation period.  The final method also implements more error checking and uses reflection to make the interface work.
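To give a flavor of that error checking, here is a simplified sketch.  It assumes the error payload carries "status" and "error" fields, which matches what I observed from the 2.0 BETA responses, but those field names are an assumption and should be verified against the current documentation:

```csharp
using System;
using Newtonsoft.Json.Linq;

public static class ErrorCheck
{
    // Simplified sketch: peek at the raw Json before deserializing and
    // throw if Mail Chimp returned an error object instead of a result.
    // The "status"/"error" field names are an assumption on my part.
    public static void ThrowIfError(string jsonObject)
    {
        JToken parsed = JToken.Parse(jsonObject);

        if (parsed.Type == JTokenType.Object &&
            (string)parsed["status"] == "error")
        {
            throw new InvalidOperationException((string)parsed["error"]);
        }
    }
}
```

PostRequest can call this on the raw response before handing it to JsonConvert.DeserializeObject, so callers see an exception rather than a half-populated object.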

I had originally considered publishing this code over at CodePlex.  A couple of things happened before I did, however.  First, I was working against the (at the time) BETA version of the 2.0 API.  It was clear the API needed more work and polish before I could consider it ready for prime time.  Second, the developers over at Mail Chimp would, as is necessary for a pre-release product, make breaking changes to the API.  Notifications were made - but only within various threads of their forums.  I wasn't a regular reader of the forums and didn't catch every thread, so I often had to react once something broke in my unit tests.  Additionally, the breaking changes weren't documented quickly, or at times at all - meaning I'd trip across something and not find it mentioned on the forums anyplace.  This may sound a bit critical, which isn't the intention, but for a BETA product released to the general public it seemed a bit immature as a semi-stable solution for evaluation.

The final issue was more of a design choice within Mail Chimp that didn't fit our requirements.  It seems that when a customer opts out of a list they are completely deleted from the database within Mail Chimp.  With Constant Contact the customer is retained and marked in a fashion that prevents the campaigns from sending out any more correspondence.  Our desire was to continually refresh the lists with customer information - we didn't want to give any thought to who was already added, who had opted out, and who had never been added before.  We also wanted to continually update the properties of a customer so that segments could be dynamic within each of the lists.  As it stood at the time of this writing, I was able to re-add a customer to a list from which they had previously opted out.  Once they were re-added I was able to send emails to that customer again.  This would essentially put us in the position of having to maintain our own list of customers who had opted out - something that should be a core feature of any tool we decided to use.

Tuesday, June 18, 2013

MFJ-1270B TNC

A friend of mine has threatened for the past twelve months to find and then purchase for me an old TNC.  He finally carried through on that threat and found one at the Reno, NV HAM swap in May of 2013: a MFJ-1270B (manual here).  For those that don't know what a TNC is - think data connectivity over the airwaves.  Prior to cell phone data plans, and even before WiFi, the TNC was one of the few ways to share data over the airwaves.  I won't bore you with other details as there are other sources on the history and use of TNCs.

When I first started in on the TNC I discovered a few things.

First, this device is controlled by TWO Z-80 processors.  These weren't the cheap knock-offs either - these were actually made by Zilog back in the day.  I have a special place in my heart for the Z-80 - it was the chip on which I first learned assembly.  I recommend learning assembly to any software developer who has an interest in programming microcontrollers.  That knowledge provided some utility even during my Windows 3.1 days programming in Borland C/C++.

Second, I don't own a serial DB25 cable.  Not even a USB/DB25 cable.  I do own a USB/DB9.  Pretty sure this wouldn't cause a problem - reading over the documentation of the TNC confirmed it.  Luckily, getting blank tips and soldering a DB25/DB9 cable turned out to be the easy part of this exercise.

I discovered that the connector on the back of the TNC is essentially a 5-pin DIN - the same used in a MIDI interface connector.  The sad part here is that MIDI only requires three pins to be active, so my old SoundBlaster joystick/MIDI connector won't work as a plug.  Kind of a bummer, as I had already cut the old SoundBlaster cord when I discovered this problem.  I checked a MIDI patch cable I had laying around; it too only has three of the 5 pins connected.

My radio - an FT-897D - has a 6-pin DIN interface.  Finding blank DIN 6 and DIN 5 tips wasn't difficult - I discovered them at Fry's Electronics for under a quarter.  Since you can't simply go buy a cable like this, you have to find, create, or re-purpose cables/tips from other sources.  While I found the right connectors at Fry's, I quickly discovered that my soldering skills weren't up to the challenge of putting an 18 gauge wire on a flat-tipped soldering post (seriously, who thought that would even work) - well, not without bridging the DIN 5 or 6 posts together.  I ended up buying a pre-built cable from packetradio.com.

My first attempt at setting up packet radio sadly failed.  While I was successful in getting the device to work in "local" mode, I discovered that I simply wasn't able to connect properly.  Thinking I might have to make some adjustments, I discovered this site: http://ohiopacket.org/index.php/Calibrating_Audio_Tones_for_MFJ_TNCs which provided some great tips on tuning the MFJ-1270B for optimal functionality.  I discovered my audio tones were out of sync and was successful in getting those corrected.

Despite the confirmation that the tones were now properly tuned and sufficiently loud (confirmed by a friend in nearby Lincoln, CA), I just couldn't hear myself on the air using packet radio.  I even checked whether the audio coming from the radio was strong enough for the TNC to pick up and process.  No matter what adjustments I made I just couldn't get connected.  While I haven't given up on making this work, I am pretty convinced that I need to look at another device or another method of doing packet radio.

Sunday, May 26, 2013

SeeedStudio TFT v2...

Finally had a chance to sit and play with the SeeedStudio TFT v2.0.  I will be playing with this device a bit more, but for now I was able to examine their API and run through a few of their demo applications.

First, a look at some of the sample applications.  I looked at two of them.  The first is the "paint" program.  It demonstrates the basic functionality of the TFT display.  The first mistake I made was not realizing that the library provided by SeeedStudio actually comes in two parts.  The first part is called "SeeedTFTv2" - upon attempting to build "paint" it was clear that I also needed to download the "SeeedTouchScreen" library to enable any of the touch functionality.  I was a little perplexed by this; it is something I have to dig into a bit more, as it seems all the functionality to get user input from the screen is located in the "SeeedTouchScreen" library, while "SeeedTFTv2" contains the code to write to the display.


Not to be thwarted (despite this curiosity) I pressed forward.  Above you can see the sample program running on an Uno device.  In order to draw anything resembling a line the user has to take their time and slowly draw the line across the screen.  I can't blame the library or the screen - this seems to me more a function of the screen press being polled on each pass through the Arduino's main loop.

I then took a look at the "shapes" sample program, shown below.


I adjusted this program so that when the screen was pressed, rather than drawing the circles from the inside to the outer edges it would draw them from the outer edge to the inside.  Again, one must press for a bit until the program registers the press and the action changes on the screen.  Below you can see my changes.  The "Point" object is only available from "SeeedTouchScreen"; it collects the "z" value - the pressure of the finger press.



So I am left wondering how best to use this device.  I need to see if this device will generate interrupts to make the screen more responsive to user input, or I might have to adjust the pressure threshold to see if that will increase the responsiveness.  As it stands I am not sure I could use this as a means to quickly stop a function or the motion of a robot that needed to stop "immediately".

The graphics are reasonable for a screen this size and offer a decent quantity of color choices (65K) in a 320x240 pixel display area.

Saturday, May 18, 2013

Maker Faire and the next Arduino project...

There's been a lull in my Arduino programming as of late.  Seems the last time I had some passion around building something with that platform was about 3-4 months ago, when I purchased a number of parts to build a dual-axle controlled track system using the Adafruit motor shield.  While this is still on my workbench, I got frustrated with the Tamiya dual transmission kit - mine seems to have a problem where one axle spins more freely than the other.  This, as you may have guessed, causes the track system to "pull" in one direction, much like when your car is out of alignment.  Since I wasn't using stepper motors I couldn't correct the problem with micro adjustments to align the tracks in the direction I wanted to go.  I am not even convinced I could do this without a compass sensor and a stepper motor.  I will take a look at this in a few weeks and see what can be done about it.  In the meantime I managed to find something more interesting to do - something I found at this weekend's Maker Faire.


Yes, I am aware TFT displays aren't anything new.  But this one looked interesting.  It offloads the processing of the display from the main CPU and provides a SPI interface.  This prevents burning up a number of pins on the Arduino - which for some projects can be a nice feature.  I was able to get this one from Seeed Studios for a reasonable price, and it seems to be well supported by the vendor.  In fact, the reps from Seeed Studios brought with them a working GPRS cell phone built on the Arduino platform that used this screen for the keypad.  What was even more wild was that they built an enclosure for this project using a 3D printer (yes, a 100% contract-free, open hardware, open source cell phone).  I haven't yet decided what to do with this screen - maybe an easy game like Simon or Tic-Tac-Toe?


Wednesday, May 8, 2013

Why use System.Runtime.Caching.MemoryCache?

I ran across a problem this week while putting the finishing touches on a data access layer for a series of WCF services.  These services needed to pull information from an IBMi (AS/400) "database" (actually the pre-1980's file system).  I won't go too far into the problem, but the need was this: I had to cache information regarding what environment and/or library to use to invoke a program running on the IBMi.  I decided this information was likely best stored in a Dictionary&lt;string, string&gt; - I could look up the name of the program and easily retrieve the library I needed to be in to invoke it.

So then, how best to cache this information between successive calls to a WCF service?

Well, as it turns out, the IIS worker process in version 5.1 and greater is quite handy.  So long as the application pool isn't recycled, either from a time-out or by manually recycling the pool, the object you define as your ServiceContract will remain in memory.  Meaning any member bearing a static definition within this object will remain, well, static.

So my first thought on solving this problem was to simply define a Dictionary&lt;string, string&gt; object as a private static member of my ServiceContract class, load it up, and then use it within any object that my ServiceContract class needs in order to do its work.  Basically something like this.

 public class Service1 : IService1  
 {  
   private static Dictionary<string, string> myDictionary = new Dictionary<string, string>();  
   
   public string GetData(int value)  
   {  
    Service1.myDictionary.Add(myDictionary.Count.ToString(), value.ToString());  
    return string.Format("You entered: {0}", value);  
   }  
 }  

See any problems yet?

If your guess was that in order to use the static "myDictionary" object I would need to carry it to each and every object that requires access to the items within the dictionary, then you can immediately see my folly.  If you aren't planning on having many (or any) helper objects then this really isn't a big deal.  The problem I faced was that the ServiceContract object of the WCF services I was coding for was about 8 or 9 layers above my data access layer.  I was really going to be popular changing all the objects between the topmost ServiceContract and my object - especially since most of those objects didn't care about or need the contents of the Dictionary.

Enter the MemoryCache

MemoryCache is an object located in the System.Runtime.Caching assembly.  This object was added a few years ago with the release of .NET 4.0.  It never really caught my attention until this problem came up - but essentially it is a way to store away any objects you might need someplace else.  Or put another way, if you are an old C programmer (like me), it is a way to tuck away global variables (or objects in this case) for reuse in other places in your application.  The API for this object can be found here.  A quick search will also turn up other information and sample code on its use.  The purpose here really isn't to describe the API but rather to provide a practical example of its use.
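Before wiring it into the service, a minimal console sketch shows the core idea - MemoryCache.Default is a single process-wide cache that any code in the process can read and write by key (this assumes your project references the System.Runtime.Caching assembly):

```csharp
using System;
using System.Runtime.Caching;

class CacheSketch
{
    static void Main()
    {
        // MemoryCache.Default is shared across the whole process.
        ObjectCache cache = MemoryCache.Default;

        // Store a value under a key using the default eviction policy.
        cache.Set("greeting", "hello", new CacheItemPolicy());

        // Any other code in the same process can retrieve it by key.
        Console.WriteLine((string)cache.Get("greeting"));  // prints "hello"
    }
}
```

The same Set/Get pair is all the service code below uses; the only wrinkle there is casting the retrieved object back to its real type.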

So on we go with my first demo project using the MemoryCache object.  My first set of changes really just swapped out the Dictionary object for the MemoryCache object.  While this didn't solve the problem identified above, this step provided me with some understanding of how the object worked, as I was successful in creating, storing, retrieving, and updating the Dictionary object across successive calls to my test WCF service.  Below are the changes I made to my "GetData" method.

 public class Service1 : IService1  
 {  
   private static ObjectCache myCache = MemoryCache.Default;  
   
   public string GetData(int value)  
   {  
    Dictionary<string, string> myDictionary = (Dictionary<string, string>)Service1.myCache.Get("ALIST");  
   
    if ( myDictionary==null )  
    {  
      CacheItemPolicy policy = new CacheItemPolicy();  
     policy.Priority = CacheItemPriority.Default;  
      myDictionary = new Dictionary<string,string>();  
      Service1.myCache.Set("ALIST", myDictionary, policy);  
    }  
   
    myDictionary.Add(myDictionary.Count.ToString(), value.ToString());  
   
    return string.Format("OK");  
   }  
 }  

I also added a new method that retrieved a complete list of Dictionary items to the client so I could keep checking if the MemoryCache (and its embedded Dictionary object) would retain the information I needed to keep around.

 public List<Avalue> GetAllCalls()  
 {  
   List<Avalue> values = new List<Avalue>();  
   
   Dictionary<string, string> myDictionary = (Dictionary<string, string>)Service1.myCache.Get("ALIST");  
   
   if ( myDictionary != null)  
   {  
    foreach(var pair in myDictionary)  
    {  
      Avalue aValue = new Avalue() { CallTime = pair.Key, CallValue = pair.Value };  
      values.Add(aValue);  
    }  
   }  
   
   return values;  
 }  

So once this second version of my prototype was working, I went to see if another object - one that would pass in and out of scope during another method call of my service - could get a handle to the same Dictionary object and update it for me.  If this worked then I could avoid having to change all the objects between my data access layer and the ServiceContract object.

In order to test this I created another object within the WCF project called "DoSomething"  which is defined below.

 public class DoSomething  
 {  
   private ObjectCache doSomethingCache;  
   
   public DoSomething()  
   {  
    doSomethingCache = MemoryCache.Default;  
   }  
   
   public void DoIt(int value)  
   {  
    Dictionary<string, string> myDictionary = (Dictionary<string, string>)doSomethingCache.Get("ALIST");  
   
    if ( myDictionary!=null )  
    {  
      myDictionary.Add("DoSomething " + myDictionary.Count.ToString(), "DoIt" + value.ToString());  
    }  
   }  
 }  

You'll notice that within the constructor of this object it obtains a handle to the MemoryCache object.  It then uses this handle, within the DoIt method, to pull out the Dictionary object and add an entry.  You'll also notice that the handle to the MemoryCache goes out of scope along with the lifespan of this object.

The final version of my service adds a new method called SetNewItem.  This method creates a DoSomething object, calls the DoIt method, and returns, causing the DoSomething object to go out of scope and eventually get picked up for garbage collection.

 public string SetNewItem(int value)  
 {  
   DoSomething something = new DoSomething();  
   
   something.DoIt(value);  
   
   return ( "OK");  
 }  

You can then see that I didn't need to pass around the Dictionary or the MemoryCache object, and that the DoSomething object is capable of obtaining a handle to the Dictionary object on its own.

So, the question is: did it actually work?  Did the MemoryCache manage to stay around between calls?  Was the Dictionary object populated correctly?  The answer of course is yes.  Below is a screen shot of the WinForms application which calls the Service1 web service.  I invoked the GetData method a few times as well as the SetNewItem method.  Each of these calls was given a random number between 0 and 42 by the client.  After calling these methods a number of times I requested a full list of the contents of the Dictionary object to display within a list box on the screen.


As illustrated above you can see that the Dictionary object is alive and providing me a cached list of items that I can pull from pretty much any place within my running process.  Once the application pool was recycled I pulled down the list again.  However, because the pool was recycled there were no more entries in the Dictionary.