WebCenter 14.2.0 release notes

Additions from 12.8.1 to 14.2.0 (Version number now in line with database version)


  • Added a new user config to enable and disable order candidate reviews.
  • Changed order candidate reviews to not show candidates with a ‘WCandidate’ status.

Bug Fixes from 12.8.1 to 14.2.0


  • Updated the education, work experience and skills pages to work with the new 14r table schemas.


  • Updated some of the vendor procedures to work with the new 14r table schemas.
  • Fixed some of the error messages to have the appropriate color.
  • Updated the label of the delete timecard button on the time entry page.
  • Updated our W2optin page to work with new tables.

WebCenter 12.8.1 release notes

Additions from 12.8.0 to 12.8.1


  • Added a new Order Request Workflow system for customers that need to have order requests approved by other contacts before the order can be filled.
  • New user configs have been added to work with the new order request workflows.
  • New notifications have been added to work with the new order request workflows.
  • Order request reviewer statuses and order request event history have been added to the Order Details page.
  • Added links to the task page on the customer Order Search and Timecard Dashboard pages to notify contacts if they have any pending reviews.
  • Added functionality to the Candidate Details page that will bring you right to the download manager if the employee only has one resume.

Bug Fixes from 12.8.0 to 12.8.1


  • Fixed the work locations list box so it will not duplicate items in the list.
  • Fixed which notification gets sent out when an applicant is rejected.
  • Fixed a bug when gathering the last four digits of an SSN.
  • Fixed a bug when gathering the question IDs of wrong answers on the questionnaire.


  • Fixed a JavaScript bug on the Payroll History page.
  • Fixed a bug in the timecard template config page that was causing the cost code to always show in the preview window once you have viewed a timecard template that had shown the cost code.

Bridging Java to .NET or how to lose several days of your life

I’ve spent the past three weeks, on and off, trying to figure out how to use old Java code in one of our .NET projects. A goal for a couple of upcoming clients is to better facilitate the generation of tax and various government forms. With the many thousands of forms we need to support, we didn’t want to design and maintain these forms in-house. For a solution we turned to our primary dead-tree form provider, Nelco.

Nelco provides a PDF form package, but unfortunately the PDFs are not AcroForm compatible. You have to use their XML/PDF form-merging software. More unfortunately, their solution is a 10-year-old Java SDK with no source code. I have nothing against Java, but here at TempWorks we settled on .NET many years ago and I am not a fan of mixing development platforms. For the past 9 years .NET has provided everything I need to get my job done. So here is my problem: how do I get this Java package to work with our .NET code without any weird hacks, and make it easy for future TempWorks .NET developers to maintain?

I spent a few days trying various packages without much luck. A few worked fine, but they were expensive from a royalty perspective or more cumbersome than just using Java and writing a web service for communication. Finally I came across an open source solution called IKVM.NET. The cool thing with this solution is that you can use IKVM to take compiled Java code (classes or jar files) and convert it to compiled .NET assemblies. After a few days of finding the magic IKVM command-line recipe and fighting with Java class paths, success!
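I didn’t keep the exact recipe, but the general shape of an ikvmc conversion looks something like this (the jar names below are placeholders standing in for the jars in Nelco’s SDK, not the real file names):

```shell
# Convert the SDK's jars into a single .NET class-library assembly.
# -target:library and -out: are real ikvmc options; the jar names are made up.
ikvmc -target:library -out:NelcoForm.dll nelcoform.jar pj.jar
```

From there you reference NelcoForm.dll like any other .NET assembly, which is what makes the C# example below possible.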

Example Java code from Nelco’s SDK

// This code does the merging of an xml file and a pdf file.  The fields 
// are loaded from the Xml document by the FormBean "utility" into the 
// FormFields object named "inputFields".  The fields are merged into an 
// existing pdf file specified by the variable "pdfFile" by the PDFMerge 
// object named "pd".  The resulting pdf object with the merged xml fields 
// is the Pdf object named "pdf".  A new FileOutputStream is created 
// using the output pdf name and then the function writeToStream is called 
// to write the merged pdf document to disk.
public Pdf PdfFileMergeXmlFile(String xmlInputFile, String pdfFile) {
  // The following code just tweaks the necessary file names.
  xmlInputFile = xmlInputFile + ".xml";
  String pdfOutputFile = pdfFile + "_1_OUT.pdf";
  pdfFile = pdfFile + ".pdf";
  // Now create the necessary Pdf and FormFields objects.
  Pdf pdf = null;
  FormFields inputFields = null;
  // The following string and object are used to read the XML and
  // create a FormField object.
  String xmlbuf = null;
  FormBean utility = new FormBean();
  // for purposes of this example, default data is read from a file (xmlInputFile)
  // and merged with the PDF represented by pdfFile.
  try {
    // The following block of code reads the input data/fields from the XML
    // data file stored on disk.
    xmlbuf = utility.getXML(new File(xmlInputFile));
    inputFields = utility.getFormFieldsInput();
    System.out.println("- Loaded input data from: " + xmlInputFile);
    // We use the PDFMerge object to merge a FormFields object and a Pdf object.
    PDFMerge pd = new PDFMerge();
    // So, here we are making the call to do the merge.
    pdf = pd.merge(pdfFile, inputFields);
    System.out.println("- Data successfully merged with " + pdfFile);
    // Here we are writing the PDF file out to disk.  First, we get a new
    // FileOutputStream with the desired file name.  Next, we call the Pdf's
    // writeToStream method to write out the Pdf.
    FileOutputStream fos = new FileOutputStream(pdfOutputFile);
    pdf.writeToStream(fos);
    System.out.println("- Wrote new PDF file " + pdfOutputFile + " successfully");
  } catch (Exception ex) {
    ex.printStackTrace();
  }
  return pdf;
}

The resulting .NET assembly generated by IKVM in Reflector.


My C# console application using the above Java example code.

using System;
using com.etymon.pj;
using com.nelco.form;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string xmlInputFile = @"E:\Temp\Nelco\pdfdemo\79411.xml";
            string pdfFile = @"E:\Temp\Nelco\pdfdemo\79411.pdf";
            string pdfOutputFile = @"e:\temp\79411_OUT.pdf";
            var pdf = PdfFileMergeXmlFile(xmlInputFile, pdfFile, pdfOutputFile);
        }

        private static Pdf PdfFileMergeXmlFile(String xmlInputFile, String pdfFile, String pdfOutputFile)
        {
            Pdf pdf = null;
            FormFields inputFields = null;
            string xmlbuf = null;
            FormBean utility = new FormBean();
            // Read the input data/fields from the XML file on disk.
            xmlbuf = utility.getXML(new java.io.File(xmlInputFile));
            inputFields = utility.getFormFieldsInput();
            Console.WriteLine("- Loaded input data from: " + xmlInputFile);
            // Merge the fields into the PDF, then write the result to disk,
            // mirroring the Java example above.
            PDFMerge pd = new PDFMerge();
            pdf = pd.merge(pdfFile, inputFields);
            java.io.FileOutputStream fos = new java.io.FileOutputStream(pdfOutputFile);
            pdf.writeToStream(fos);
            Console.WriteLine("- Wrote new PDF file = " + pdfOutputFile + " successfully");
            return pdf;
        }
    }
}



Kudos to the main guy behind IKVM.NET, you saved me a bunch of work.

Prototyping the next generation mobile web application

For many years TempWorks has been selling our TempWorks Mobile product. It has been very successful with people on the go who want quick access to their TempWorks data. The product has gone through many versions but has never kept up with the capabilities of today’s mobile phones. With the arrival of the iPhone, HTC handsets, Android, and Palm Pre phones you can do so much more than ever before.

For almost a year I’ve been wanting to revamp TempWorks Mobile and update it for the newer phones. Unfortunately, large blocks of spare time for development are a luxury I don’t have very often. About a month ago, though, I decided to shuffle some of my projects around and start prototyping the next generation of Mobile. I did this for a couple of reasons: first, honestly, I am tired of looking at a user interface that is ancient compared to today’s mobile web-app standards; second, I wanted to get my hands dirty with technology I haven’t used before.

During the lifetime of Mobile I noticed a lot of people liked to use it on their desktop for quick access. I think that is a great idea, because sometimes you just need a quick number or bit of info. The only downside to old Mobile is that it didn’t adapt to its environment. You saw the exact same web pages on your desktop as you did on your mobile phone. What a waste of real estate. Then, on the flip side, if you add more features to use that real estate, the pages become overwhelming on the mobile phone. With this dilemma as one of my primary development points, I went to work. I chose ASP.NET MVC because its basic architecture solves that problem and because I hadn’t used it before.

ASP.NET MVC is based on the MVC (Model-View-Controller) paradigm that was popularized by Ruby on Rails. I took advantage of ASP.NET MVC’s ability to have more than one view for a URL or route. I modified the routing engine to inspect the incoming request, determine whether it is coming from a mobile device (and what kind) or a desktop browser, and route the request to the appropriate view. Here is where the development time savings come into play. Even though I have many views, they all use the same back-end logic to feed them data. For example, when you search for a contact, the contact search logic lives in my controller. I have only one controller for my mobile and desktop views, so I write the logic once and am able to support many different views.
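The routing tweak itself lives in C# inside ASP.NET MVC, but the device-detection idea at its heart fits in a few lines. Here is an illustrative stand-in (not the production code; the view-folder names and user-agent tokens are examples I picked):

```java
// Illustrative sketch: classify an incoming User-Agent header so a router
// can pick a view folder ("Mobile/iPhone", "Mobile/Android", ...) or fall
// back to the desktop views. The folder names are hypothetical.
public class DeviceClassifier {
    public static String classify(String userAgent) {
        if (userAgent == null) return "Desktop";
        String ua = userAgent.toLowerCase();
        // WebKit-based handsets the prototype targets.
        if (ua.contains("iphone")) return "Mobile/iPhone";
        if (ua.contains("android")) return "Mobile/Android";
        if (ua.contains("webos") || ua.contains("pre/")) return "Mobile/PalmPre";
        // Everything else gets the desktop views.
        return "Desktop";
    }

    public static void main(String[] args) {
        System.out.println(classify("Mozilla/5.0 (iPhone; U; CPU like Mac OS X) AppleWebKit/528.18"));
        System.out.println(classify("Mozilla/5.0 (Windows NT 6.1; rv:1.9) Gecko Firefox/3.5"));
    }
}
```

A fall-back classification like the final `return "Desktop"` is what keeps unknown browsers working while new device views are added one at a time.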

In this picture you can see that I have one ContactController and currently eight views using it for logic: four mobile views and four desktop views.

Now on to some glamour shots… In my prototype I am currently focusing on WebKit-based mobile browsers, as used by the iPhone, Android, and Palm Pre phones. I plan to create more mobile views for the Blackberry and Windows Mobile 6.5 browsers, plus a fall-back mobile view for everything else.

iPhone, Android, and Palm Pre screenshots

On the desktop I am developing for the three major browsers: IE 7 and later, Firefox, and WebKit (Safari and Chrome).


For iPhone users, you get the ability to use TempWorks Mobile as a full-screen application and launch it from your home screen.

Mobile icon on the home page; notice no address or status bars

Overall I have been impressed with the technical capability of ASP.NET MVC. It does have one downfall for me: the reintroduction of spaghetti code into your HTML pages. You old-school ASP folks and PHP developers probably don’t mind, but after using WebForms for almost 10 years it takes a bit of getting used to again. Although I will admit it’s nice getting back to the HTTP bare metal and not dealing with WebForms’ idiosyncrasies.

WebCenter 12.8.0 Release Notes

Additions from 5.2.13 to 12.8.0 (Adopted TempWorks database version numbering system)


  • New data access layer.
  • The preferred work locations field on the personalinfo page can now be either a text box or a list box.


  • New data access layer.


  • New data access layer.
  • Added a new sitesettings configuration for setting the preferred work locations field type on the personalinfo page.
  • Updated our customer invoice details page to work with Enterprise’s new timecard document image linking.

Bug Fixes from 5.2.13 to 12.8.0


  • Fixed a bug in the RSS feed with orders having an ‘&’ in the job description field.


  • Corrected grammar in the W2 opt-in disclosure.

Deployment scenarios for Enterprise

Today we have a few choices for getting business software onto someone’s computer. There are web applications like Google Gmail, Rich Internet Applications (RIAs) that use Flash or Silverlight, and there are Windows applications like Microsoft Outlook. We determined years ago that a Web 2.0-style web application wasn’t going to cut it. We spent a few years working on a “WebClient” for TempWorks and decided that there were too many roadblocks to achieve the vision we had for a client. Those were the days of early AJAX, when the browser wars had gone stagnant and everyone was running IE6. We also passed on RIAs later on because their browser sandbox introduced a lot of the same roadblocks we had hit during our WebClient development. That meant a Windows application, but we liked the easy deployment of a web application. We could have fallen back into easy mode and required Windows Terminal Services (TS) or Citrix for all our remote installations, but that opens another can of issues. If you’re a sysadmin I don’t have to list them; you know them well. For others not familiar, here are the nightmares of any TS/Citrix administrator: printing, local drive access, and more recently USB devices. There is a rich third-party market to compensate for these shortcomings, but those products drive up the cost of already expensive TS/Citrix licenses, and that makes our product more expensive for the customer.

Yes, Virginia, there is another way. I had long been aware of a technology from Microsoft called remoting. It was kind of a sibling to web services, which were another possible choice. Remoting was lighter weight but used non-standard TCP protocols. I’d played with it and liked it, but it was very finicky to get working “right”. Web services offered standard HTTP/S protocols but were bloated. We needed something lightweight and fast like remoting that used standard protocols, so we could cross firewalls without special configurations. Enter WCF. I first saw WCF at Microsoft’s PDC show in 2005 and loved it instantly. It offered exactly what I was looking for: lightweight and firewall friendly.

With WCF in our toolkit we had a way for our application to communicate across the Internet. Next I needed a way to deploy our application without too much hassle for the end-user. Microsoft had that answered for us as well. ClickOnce is a technology that allows .NET applications to be installed by a non-administrative user just by clicking on a web link (an anchor tag, for you web folks). Users visit a web page and click a regular web link, and ClickOnce installs all the files it needs, creates the shortcuts for the user, and launches the application. It also allows in-place upgrading, so when a new version of the application comes out it will automatically download and install the update.
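There really is nothing exotic on the web side of that flow; the launch point is a plain anchor pointing at the ClickOnce deployment manifest. The URL and file name below are made-up examples (the `.application` extension is the ClickOnce manifest format):

```html
<!-- Clicking this link kicks off the ClickOnce install/update/launch flow. -->
<a href="https://hosting.example.com/Enterprise/Enterprise.application">
  Launch TempWorks Enterprise
</a>
```

The client-side runtime does the rest: version check, delta download, shortcut creation, and launch.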

So today we have a Windows application with many more features than any web application will ever have, like USB access (think scanners and other capture devices) and local program access (think MS Outlook syncing), but one that can be installed locally with very little effort and works effortlessly across the Internet. There is a downside to all greatness, though. We use a lot of cutting-edge technology, and unfortunately many business computers are not the latest generation of hardware, or the end-user has poor Internet speeds. In these special cases we fall back to using Windows Terminal Services. I foresee that in a few more years the hardware cycle will catch up with our technology, as will broadband speeds. When that happens, the less we have to fall back to TS the better, and for me that is something to look forward to.

If you want to try out our latest and greatest technology, visit our free download center here.


WebCenter 5.2.13 Release Notes

Additions from 5.2.12 to 5.2.13


  • Drop Downs are now valid options for answering questions on the questionnaire and agreement pages of the application.


  • Administrators can now create questions that have drop downs as available answers.

Bug Fixes from 5.2.12 to 5.2.13


  • Added logging to the new WebCenter notification tables for better error tracking.

Why do SMS messages require three Visual Studio debugging sessions…

I had to blog about the fact that I am currently watching three active Visual Studio debugging sessions on my workstation. It also gives me an opportunity to describe a bit of our new product release. About a month ago one of our customers asked us if we could come up with a way to use text messaging in one of their processes. Their process for tracking employee availability requires their employees to call in once a day so a rep can mark them as available. The customer had the brainstorm that if employees could send a text message instead of calling, they could save a lot of the employees’ and reps’ time.

Project scope in hand, I knew the scenario would be simple to code for the customer, but I wanted to take the project further. The idea of being able to send a message to an employee and get their reply crossed my mind. Other folks around the office had more ideas for text message uses, like reporting hours worked and assignment information. I also wanted it to be simple for our customers to implement regardless of their version of TempWorks. I came up with a three-tier system. The heart of the system is the SMS gateway server we maintain here at TempWorks. The SMS gateway routes all messages, both incoming and outgoing, for all customers. With this setup our customers only need to deal with us and don’t have to worry about establishing accounts with SMS providers. Tiers two and three are installed on the customer side. Tier two is a basic web service interface that receives messages from the SMS gateway. Tier three is our SQLCLR assembly that takes care of sending messages to the SMS gateway.
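The gateway itself is proprietary, but the routing idea at the heart of tier one can be sketched as a message hub with one inbound queue per customer. The class and method names below are illustrative, not the real product’s API:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Minimal sketch of the gateway's routing idea: messages arriving from the
// SMS provider are queued per customer, and each customer's tier-two web
// service drains only its own queue. synchronized keeps the maps/queues
// safe, since carrier traffic and customer polls arrive on different threads.
public class SmsGateway {
    private final Map<String, Queue<String>> inbound = new HashMap<>();

    // Carrier-side entry point: route an incoming message to its customer.
    public synchronized void receive(String customerId, String message) {
        inbound.computeIfAbsent(customerId, k -> new ArrayDeque<>()).add(message);
    }

    // Customer-side entry point: hand the next queued message to tier two.
    public synchronized String nextFor(String customerId) {
        Queue<String> q = inbound.get(customerId);
        return (q == null) ? null : q.poll();
    }

    public static void main(String[] args) {
        SmsGateway gw = new SmsGateway();
        gw.receive("acme", "AVAILABLE");
        System.out.println(gw.nextFor("acme"));
    }
}
```

The point of the hub is the indirection: customers never hold SMS-provider accounts of their own, they just exchange messages with the gateway.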

Debugging these projects isn’t difficult; it’s actually a little fun watching messages flow through the three debugging screens. Since the SMS gateway will be routing all SMS messages back and forth, I spent considerable time trying to make all my code multithread compatible. It’s amazing how easy Visual Studio 2008 makes this. From whiteboard to the customer demo today, it only took me about 32 man-hours.


WebCenter 5.2.12 Release Notes

New features from 5.2.11 to 5.2.12


  • All NS inserts have been replaced with the new Notification system.
  • The Sovren resume parser has been updated to the latest version.


  • All NS inserts have been replaced with the new Notification system.


  • All NS inserts have been replaced with the new Notification system.
  • Added compatibility for IE8.
  • Added a new page to the Administrative control panel to create/edit Notification templates.
  • Added a new page to the Administrative control panel to manage the Notification subscriptions for all users.
  • The “My Settings” page has been reworked to work with the new Notification system.
  • All Telerik controls have been updated to the latest version.

Major bug fixes from 5.2.11 to 5.2.12


  • The state drop down and the address are now displayed correctly when a new employment record is created.
  • The resume section now passes the BranchID to the procedures and not the BranchName.


  • Fixed the cost code and pay code drop downs on the timecard time entry pages to allow for text entry.
  • Fixed when the “Add Timecard” button should be displayed on the timecard time entry page.

Engineering four 9s

Four 9s is a term thrown around in IT circles by people who like to gloat about all the expensive equipment they have purchased. For TempWorks it means that we guarantee that our hosting services are available for use by our customers 99.99% of the year. Four 9s boils down to about 53 minutes of unplanned downtime for a given year. A footnote to our guarantee is that it does not include planned downtime for system maintenance.

Personally, I am a pessimist, or at least that is what my wife tells me. I like calling it being overly cautious and always expecting bad things to happen. Okay, same thing. Over my years here at TempWorks I have overseen our hosting services division grow from three computers to today, where we have over 60 servers. In the beginning we didn’t have the budget to worry about uptime and redundancy. If a computer went down or the Internet or power fizzled out, our customers went offline. Luckily that didn’t happen too often, maybe a few times a year, mostly because of our building’s notorious power supply conditions. We’ll just say I didn’t enjoy the coming of spring/summer storm season.

Taken circa 2002; our entire hosting solution then took up about 8U in the rack I am standing next to.

Through the years we have tried different approaches to redundancy and achieving four 9s. I am not going to document every attempt we made in the past; I want to keep this post short and to the point. Our efforts can be broken down into three bullet points.


Redundancy:

I get teased quite a bit around the office about buying two of everything. I can take it; it helps me sleep soundly at night. We looked at every failure point and doubled up on it: multiple data centers, multiple Internet connections from separate ISPs, multiple firewalls in a high-availability configuration, multiple RDP/ICA load balancers, multiple web farms for our web-based products, and finally, multiple mirrored SQL Server installations, including our production 2-node fail-over cluster.

Power Stability:

As I mentioned before, power for our building was iffy on the best of days; brown-outs were a constant. Several years ago we made the decision to invest some serious capital to fix the situation. We ended up installing a solution from APC that included our own power generator and a direct power feed from the electrical grid. The only thing I need to worry about in a power outage is keeping the generator filled with diesel, and yes, I do have emergency refilling contracts. Now I look forward to storm season. The power goes out for the building, but our data center hums along like nothing happened. I have also distributed a few protected power outlets through our office suite to the key offices that need to stay operational, like our payroll processing division.

Internet Stability:

I can never claim that we have Internet redundancy completely figured out. This issue has been the most troublesome to get right. The solution we have in place now seems sound and has been working well since we brought it online last summer. Currently we have a fiber tap on the Metropolitan Optical Ethernet loop, as does our ISP. That is our main pipe, and it terminates into our Cisco 7200. For redundancy we have two bridged T1s terminated at our Cisco 7200 and at another ISP in a different city. We’re using BGP routing between the two ISPs. Our secondary data center is on a different pipe altogether, and we keep another T1 active at a third location just in case we need an emergency pipe to the Internet when all else has failed.
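For the curious, a multihomed BGP setup like the one described boils down to something like the following on the edge router. This is only a sketch using made-up AS numbers and documentation addresses, not our actual config:

```
router bgp 64512
 ! Primary ISP over the Metro Ethernet fiber tap
 neighbor 203.0.113.1 remote-as 64501
 ! Secondary ISP reached over the bridged T1s
 neighbor 198.51.100.1 remote-as 64502
 ! Advertise our public prefix to both neighbors
 network 192.0.2.0 mask 255.255.255.0
```

If the primary neighbor drops, BGP withdraws that path and traffic converges onto the secondary ISP without manual intervention.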

Some of our routers and firewalls
One of two rows of hosting servers and UPS modules. Also me holding an award from APC for spending the most money that year. Actually it was an uptime certification.
The shiny new generator

Four 9s is hard, and it isn’t cheap. It’s pretty much two or more of everything, and it takes about four times the man-hours to get working right. It’s a never-ending battle to drive unanticipated downtime to nil. I am sure as time goes on we will keep tweaking our setup. As it stands now, it works and works great. We haven’t had an outage in about 9 months, and that downtime was only about 20 minutes; we were still trying to get our BGP routes right :)

Enterprise 12r7 Release Notes

New Features

  • New dynamic searching engine.
  • End-users can create their own custom searches and share them.
  • Enhanced system for managing customer default values and rate sheets.
  • Enhanced Resume Parser engine.
  • Required documents management.
  • Invoice merging enhancements.
  • Evaluation management.
  • Improved test score management.
  • Grids can now be customized on a per user basis.
  • End-users can now manage their own drop-down value lists without the need to call support.
  • Deep integration with Trak-1 Background Screening. No more double data entry and real-time alerts on background report documents.

  • Many bug fixes since Enterprise 12r6…

See the product page here.

WPF and Terminal Services, a mix not made in heaven

When we first architected Enterprise, Terminal Services performance wasn’t at the top of our priority list. Actually, one of our goals was to get rid of the requirement that our software run on Terminal Services in a distributed environment. It sucks to have to pay MS license fees twice: once for the Windows CAL and then again for the TS CAL. I spent a lot of time researching a distributed framework to build Enterprise on. Luckily I was fortunate to see Rocky Lhotka at a local event and heard about his CSLA framework. It fulfilled all our requirements, and after reading his book I was sold. Fast forward a few years, and we have a desktop client that can run locally and talk to a remote data store using not much more bandwidth than a similar browser application, plus we get the benefit of a full-trust application on the computer. So yes, we have a distributed application that runs across the Internet with pretty good speed, and we still find ourselves needing Terminal Services sometimes. The one factor I didn’t account for, and shame on me for being in this industry over 15 years and not seeing it, is that companies like to run really old hardware.

We designed Enterprise using the latest and greatest from Microsoft, sometimes called the bleeding edge of technology. For developers that is great, because you get to learn about stuff before it becomes mainstream and old school. The problem is that our newfangled WPF UI has some moderately hefty hardware requirements. Of course, as time goes on we have Moore’s Law and business equipment upgrade cycles on our side. Sometimes, though, they don’t kick in soon enough for our liking. Companies run Terminal Services to get a few more years out of their users’ desktops, or the user doesn’t even have a full desktop and runs a WinTerm. Each company has its agenda, and we as a company have to adapt our software to run in our market space. As Enterprise rolled out the door and into customers’ hands, we began to learn that users sometimes didn’t have the horsepower to run Enterprise and all the WPF goodness we put into it. Now, I am not saying you need a $5,000 gaming machine to get decent performance, just something with moderately reasonable specs. In today’s hardware market, spending $400 at Best Buy will get you a machine that runs Enterprise pretty well. But when you’re dealing with a company that hasn’t upgraded its user machines in the last 5 to 6 years, you might as well be trying to run Enterprise on an Intel 486. So we have come full circle and have to deal with Terminal Services again.

Late last year Aaron was tasked with redesigning our UI to take Terminal Services into account. This meant detecting when we’re running on Terminal Services and removing all our transitions. Transitions are a bad thing when it comes to Terminal Services, because as we fade and slide UI elements around, Terminal Services needs to repaint the screen on the client end. You end up with screen tearing and horrible lag as Terminal Services tries to catch up while you slide that panel neatly out of the way. By getting rid of all the WPF eye candy we end up with a snappy UI. We did all this work and were proud of ourselves until about a month ago, when Enterprise was being installed at a customer that was self-hosted and running Terminal Services. At the time we thought it was no big deal, because we knew we had Enterprise tuned for Terminal Services sessions. What we didn’t anticipate was whether their Terminal Services hardware was adequate.
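Detecting the session is the easy part; WPF exposes it directly. A check along these lines is roughly what you want (a sketch of the idea, not our actual UI code):

```csharp
// SystemParameters.IsRemoteSession reports whether the app is running
// inside a remote (RDP/Terminal Services) session.
bool isRemote = System.Windows.SystemParameters.IsRemoteSession;

if (isRemote)
{
    // Skip the fade/slide storyboards entirely and jump straight
    // to the final layout, so the client never has to repaint
    // dozens of intermediate animation frames over the wire.
}
```

The hard part is the discipline of routing every transition through that one switch instead of sprinkling animations throughout the codebase.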

Back in the Access days of TempWorks we would recommend a two-CPU machine with at least 2 GB of RAM. That was usually enough to host about 25 users. When we installed Enterprise on a machine with this config we could only get 5 to 10 users on comfortably. After some pondering I came to the conclusion that we were CPU bound. Well, that was easy to figure out, because the CPUs maxed out at 100% utilization when we approached 10 users; the question was why. Normally Enterprise doesn’t use many more CPU cycles than any other business application. We do steal a few more than necessary when we do our fancy WPF transitions, but nothing too far off the scale. But in Terminal Services all those transitions are turned off, so that wasn’t it. We came to the conclusion that it was a combination of issues. First of all, Enterprise is very multithreaded. Almost everything we do is fired off in its own thread so the UI doesn’t freeze; data coming and going is threaded off. That is not a big deal when you’re the only person on a dual- or quad-core CPU machine, but it is a big deal when you’re sharing an old single-core CPU server with many users. Compounding this is the fact that WPF uses software/CPU rendering, because you don’t have access to a GPU in a Terminal Services session. So the CPUs were maxed out trying to keep up with all the threads while also trying to draw Enterprise.

To test the theory we went out and purchased a new Mac Pro desktop with two quad-core Nehalem Xeons at 2.2 GHz. Since Nehalems are hyperthreaded, we had 16 logical cores at our disposal. We also installed 10 GB of RAM to make sure we didn’t run out of memory, and threw on the Windows Server 2008 R2 beta running Terminal Services. Yes, I know, a Mac running Windows. Anyway, to test it I filled our training room with volunteers and had a few developers at their desktops all log in to this machine and start using Enterprise. I asked all the users to search, open records, and navigate to different forms. I was relieved as I watched the server’s resource monitor: the server hardly broke a sweat. We had about 30 users logged in and 50 instances of Enterprise actively being used, and the CPUs never broke 50% utilization. Better yet, the CPU frequency hovered around 75%, meaning the CPUs weren’t even at full power. Not bad for a $3,500 desktop.

I can rest easy again knowing Enterprise runs very well on Terminal Services as long as you have current hardware. To me that is fine, because it is easier to convince a company to upgrade its few servers than dozens and dozens of desktops.



Coming up shortly in the development of Enterprise are a few milestones that have gotten me to stop and ponder. Normally I am not very nostalgic about such things, but they seem to be piling up on each other lately. First, we have another version of Enterprise coming out, 12r7. Not a big deal to me, just another feature set and deadline. Actually, it is a big deal, but I’ll let our marketing department hype it up. Second is more of a developer geeky thing: our 10,000th code check-in is fast approaching. Another few days and we’ll hit the magic number. Maybe I’ll give away a Best Buy gift card to the lucky developer. Third is the one that hits me the most. This June will be the third anniversary of Enterprise. It became a glimmer in our eyes during the Microsoft PDC show in Oct. 2005. We were excited by what the new .NET 3.0 technology was enabling us to do. WPF caught our eye and we haven’t looked back. So, with me feeling nostalgic, I thought I might dig up some old media I have collected over the years of Enterprise development.

To begin our trip, here is the first project spec of what we set out to achieve in Enterprise.

Ledger sheets

1. Paper-based solution
2. A miracle happens…
3. TempWorks Enterprise

It was more than three steps, but I like to tell the story that way. To begin with, we had to sell the idea of what we wanted to do to the rest of the company. We knew we needed to get off the Access development platform, and this was our chance. Actually, it was our third chance, but that is another story for another time.

From January 2006 to June 2006 a few of us developers went into seclusion and worked on our “Skunk Works” project. Nobody knew what we were up to, and strangely, not many people asked what we were working on. Truthfully, I think people got sick of asking what we were doing, because we would launch into some C# mumbo jumbo, they would get that glazed look in their eyes, and then walk away.

In June 2006 we unleashed Enterprise (then code-named “FX”) upon the employees of TempWorks. It was a fun event where we got everyone together and talked about some new products we were working on, like DocCenter and TwMobile. Then we played this video on a huge projector screen.

After the video introduction we gave out TempWorks FX tee shirts to everyone. We on the FX dev team made it a big event for the company. Heck, at least people got a free lunch out of the deal. At the time I wanted to make a big deal out of what we had done; I thought it was a paradigm shift in our company’s development efforts. Looking back at it now, it was.

To continue our trip through history, here are some screenshots of the evolution of Enterprise.

The prototype you see in the video:

Internal working version for 2006/2007:

Version shown at Staffing World 2007:

The dark ages (early 2008):

Current version, debuted at Staffing World 2008:

Anyway, here we are today, three years of development, 10,000 code check-ins, and 7 releases later. Time does fly.

Thank you Aaron Nottestad, Matt Sonnenberg, Jeff Bradford, Jason McCord, Eric Anderson, and Eric Rodewald for your efforts on developing Enterprise.