
Selected Projects List
Teams Quality Dashboard
Understanding adoption and issues with Microsoft Teams
An internal offering developed across multiple project deliveries. The default Teams adoption dashboard provided by Microsoft offers only 30 days of historical quality rating data via API. This project involved regularly loading that data into a Lakehouse of Delta Parquet tables, with flexible network and organizational definitions derived from Microsoft Graph, to identify trends in quality and adoption over extended periods of time, optionally mapped to quality improvement program datasets to determine the efficacy of improvement projects. The internal work included development of sales enablement materials (data sheets, Rude Q&A, and selling templates) as well as delivery enablement materials (project templates, deployment templates, and team training materials).
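The long-horizon trend analysis can be sketched as a simple aggregation over the loaded rating history. This is a minimal stand-in, not the Lakehouse pipeline itself: the record shape and field names are illustrative assumptions for data as it might land after regular Graph loads.

```python
from collections import defaultdict
from datetime import date

# Hypothetical call-quality records as they might land in the Lakehouse
# after regular loads from Microsoft Graph (field names are illustrative).
records = [
    {"day": date(2023, 1, 2), "org_unit": "Sales", "rating": 4.1},
    {"day": date(2023, 1, 2), "org_unit": "Eng",   "rating": 3.2},
    {"day": date(2023, 2, 6), "org_unit": "Sales", "rating": 4.4},
    {"day": date(2023, 2, 6), "org_unit": "Eng",   "rating": 3.9},
]

def monthly_quality_trend(rows):
    """Average quality rating per (year-month, org unit)."""
    sums = defaultdict(lambda: [0.0, 0])
    for r in rows:
        key = (r["day"].strftime("%Y-%m"), r["org_unit"])
        sums[key][0] += r["rating"]
        sums[key][1] += 1
    return {k: total / n for k, (total, n) in sums.items()}

trend = monthly_quality_trend(records)
```

In the real system the same grouping would run over years of retained history, which is exactly what the 30-day API window cannot provide.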
Fabric Proof of Concept
Next-Gen Data Estates
A client project: deployment of a modernized data estate for a client whose existing estate centered on a 20-year-old Report Builder 3.0 implementation and paginated reports. Fabric was used to connect to existing ADLS Gen2 Data Lake assets, re-use SSIS ETL tooling in Synapse, and develop a KPI report model that helped the business understand modern data-driven decisioning, with drill-down into familiar paginated reporting.
Azure Synapse Build-With model
Big Data != Big Bucks with Azure Synapse
A very small organization providing insurance for elderly care institutions needed an evolved data estate to better understand its cost-to-profit ratio for services and to rate client institutions for premium calculation. The organization did not staff data engineers and could not afford a complete estate deployment. We leveraged our technical delivery excellence practice's training methodology in a paired-programming environment to teach client staff how to work with Synapse and how to build an effective Inmon-Kimball data warehousing architecture in an intensive six-week engagement.
Predictive Modelling Experiments
Auto ML in modern data estates
The goal of this client Proof of Concept was to determine whether ML could be used to determine when certain equipment should be sent for preventative maintenance. The client experienced significant downtime from preventative maintenance performed to maintain a solid safety rating for its projects, and wanted to reduce the overall downtime and cost burden of the activity. Device repair and utilization history data was loaded into a standard machine learning model hosted in Azure ML Studio, using AutoML for curve fitting across multiple iterations of feature experiments. The short project achieved an 84% accuracy rating for indicating when preventative maintenance should be scheduled.
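The feature side of such an experiment can be sketched without the AutoML machinery. The scoring rule and feature names below are illustrative assumptions, not the client's fitted model; they only show how repair and utilization history might combine into a maintenance indication.

```python
# Illustrative stand-in for the AutoML experiment: score devices for
# preventative maintenance from repair and utilization history.
# Feature names, weights, and the threshold are assumptions, not the
# client's trained model.

def maintenance_score(hours_since_service, repairs_last_90d, utilization_pct):
    """Higher score = stronger indication to schedule maintenance."""
    return (
        0.5 * min(hours_since_service / 2000.0, 1.0)  # wear proxy
        + 0.3 * min(repairs_last_90d / 5.0, 1.0)      # recent failures
        + 0.2 * (utilization_pct / 100.0)             # duty cycle
    )

def needs_maintenance(device, threshold=0.6):
    return maintenance_score(**device) >= threshold

fleet = [
    {"hours_since_service": 2400, "repairs_last_90d": 3, "utilization_pct": 90},
    {"hours_since_service": 300,  "repairs_last_90d": 0, "utilization_pct": 40},
]
flags = [needs_maintenance(d) for d in fleet]
```

In the actual project, AutoML searched over model families and feature combinations rather than a hand-weighted rule like this one.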
Recurrent Neural Networks
Machine Learning using commonly available NVidia GPUs
This self-study was planned as a learning tool to understand the implementation of neural networks using the Nvidia AI Toolkit (now part of the Nvidia cuDNN deep neural network API). The goal was to develop a proposal for use of this tooling by a leading insurance organization whose actuarial department was skilled in the use of Excel and Monte Carlo simulation for product actuarial work. The result was an Excel add-in that ran on CUDA-equipped devices (GPUs) accessed via the toolkit and a C++ CLI wrapper. The characteristics of the platform were evaluated against open quote engines in multiple fields. The use of a recurrent network was shown to be valid for storing historical domain vectors used to evaluate new inquiries. Preliminary results indicated a 14% lower average product quote with ~89% test accuracy, deliverable for authorization in less than a week, where prior methods took many months of testing and approval to produce.
Modern "NPU" architectures, such as Copilot+ PCs from major hardware vendors, are successors to the groundwork laid down by the Nvidia CUDA Toolkit many years ago, paving the way for highly personalized AI assistants in the near future.
Nightingale
Observable home care solutions for health and well-being.
Named after Florence Nightingale, this expert system is a research project that acquires input from personal medical signals measuring pulse oximetry, blood glucose, heart rate, blood pressure, BMI, body weight, and EEG brain activity, feeding this information into a machine learning model to predict potential ICD-10 medical conditions the user may be experiencing. Nightingale is a research project and is not intended for medical use.
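The signal-to-condition mapping can be illustrated with a tiny rule sketch. This is not the Nightingale model (which is a learned classifier) and, like the project itself, is not for medical use; the thresholds are rough textbook cut-offs used purely for illustration.

```python
# Illustrative rule sketch (not the research model, not for medical use):
# map out-of-range vitals to candidate ICD-10 codes for review.
# Thresholds are rough illustrative cut-offs, not clinical guidance.
RULES = [
    (lambda v: v["systolic_bp"] >= 140,  "I10"),    # essential hypertension
    (lambda v: v["glucose_mgdl"] >= 200, "E11.9"),  # type 2 diabetes, unspecified
    (lambda v: v["heart_rate"] > 100,    "R00.0"),  # tachycardia, unspecified
    (lambda v: v["spo2_pct"] < 92,       "R09.02"), # hypoxemia
]

def candidate_codes(vitals):
    """Return ICD-10 codes whose rule fires for this reading."""
    return [code for pred, code in RULES if pred(vitals)]

reading = {"systolic_bp": 152, "glucose_mgdl": 118,
           "heart_rate": 104, "spo2_pct": 97}
codes = candidate_codes(reading)
```

A learned model replaces the hand-written predicates with probabilities over the code space, but the input/output contract is the same.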
Cortana3
Deeply personalized assistants
A voice-interactive AI chat providing long-term memory, LLM completions, and dynamic inference for integration of expert system plug-in modules. This project involves NLP-controlled RLHF, the ability to create and re-use NLP plugins, and the ability for non-technical users to create plugins from OpenAPI (Swagger) specifications.
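Creating a plugin from an OpenAPI specification amounts to walking the spec's paths and registering each operation as an invokable descriptor. A minimal sketch, assuming a hypothetical spec fragment (the URL and operation are invented for illustration):

```python
# Minimal sketch: turn an OpenAPI (Swagger) fragment into plugin
# descriptors an assistant could invoke. The spec below is hypothetical.
spec = {
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "summary": "Current weather for a city",
            }
        }
    },
}

def plugins_from_openapi(spec):
    """Register each operationId as {method, url, description}."""
    base = spec["servers"][0]["url"]
    registry = {}
    for path, methods in spec["paths"].items():
        for verb, op in methods.items():
            registry[op["operationId"]] = {
                "method": verb.upper(),
                "url": base + path,
                "description": op.get("summary", ""),
            }
    return registry

plugins = plugins_from_openapi(spec)
```

Because the spec already names and describes each operation, a non-technical user only has to supply the document; the descriptions double as the text the NLP layer matches against.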
Free VectorSpace
Reinforcement Learning from Human Feedback, humanized.
A research project designed to allow virtual agent users to dynamically create their own vector space database domains and include them in a custom, personalized expert system.
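A user-created vector space domain is, at its core, a named collection of labeled embeddings searched by similarity. A minimal sketch with tiny hand-made vectors standing in for real embeddings (the domain names and items are invented for illustration):

```python
import math

# Sketch of per-user vector "domains": each domain holds labeled embedding
# vectors; queries are answered by cosine similarity within a chosen domain.
# The 2-d vectors are toy stand-ins for real embedding vectors.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

domains = {
    "recipes": {"pancakes": [0.9, 0.1], "omelette": [0.8, 0.3]},
    "travel":  {"lisbon":   [0.1, 0.9]},
}

def nearest(domain, query_vec):
    """Best-matching item within one user-defined domain."""
    items = domains[domain]
    return max(items, key=lambda k: cosine(items[k], query_vec))

best = nearest("recipes", [1.0, 0.0])
```

Scoping the search to a single domain is what lets each user's expert system stay personal: the same query can resolve differently depending on which of their domains is active.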
Rapid Development for AI
Cost-Effective, Game-Changing AI
A development model to rapidly prototype and deploy new intelligent agents, including HuggingFace models, a localized vector database with embeddings, and both commercial (OpenAI) and local (disconnected) large language models.
Agency Attestation
Securing Agent-Agent Collaboration
Protects a user's identity while using external services, providing an "air gap" between the user's personal credentials and the credentials used to interact with external systems. The physical implementation was a local security token service (STS) providing dynamic claims shared between agent components of an NLP system.
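The claims-ticket pattern can be sketched with a locally signed, short-lived token: the agent presents the ticket, never the user's credentials. This is a simplified stand-in for the actual STS; the secret, claim names, and TTL are illustrative assumptions.

```python
import hashlib
import hmac
import json
import time

# Sketch of a local STS issuing short-lived, signed claim tickets so agents
# can call external services without the user's real credentials.
# The secret and claim names are illustrative, not the project's design.
SECRET = b"local-sts-demo-secret"

def issue_ticket(agent_id, scopes, ttl_seconds=300, now=None):
    claims = {
        "sub": agent_id,                  # the agent, not the human user
        "scopes": scopes,
        "exp": (now or time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_ticket(ticket, now=None):
    payload = json.dumps(ticket["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["sig"]):
        return False  # claims were tampered with
    return (now or time.time()) < ticket["claims"]["exp"]

ticket = issue_ticket("calendar-agent", ["calendar.read"])
ok = verify_ticket(ticket)
```

Because the subject of the ticket is the agent rather than the user, a compromised external service learns nothing that links back to the user's personal credentials.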

Edge messaging services management system
Building Bridges at the advent of the internet
In the early days of networking, internet-based email was very new, while clients usually had groupware messaging systems within their environments. Lansoft services provided a gateway that allowed transmission of email to external, internet SMTP-based systems through internal conversion of messages. The system also provided border content filtering for multiple subnets and domains within an organization. The Lansoft Information System (LSIS) was developed to manage the deployment, operations, and billing for all services provided by the company.
Retail Branch Banking Advisor’s Portal
Market-leading employee communications
A large national retail banking institution needed a means to communicate metrics and standard practices to its branch field organizations. Microsoft SharePoint Portal Server was selected for the project at a time when Microsoft had just acquired NCompass Labs for content management but had not completed implementation of that system in the SharePoint technology stack. During this engagement, liaison work between the business and the product group was needed to clearly communicate use cases that went beyond the capabilities of the product at the time, and a supporting architecture had to be developed to allow flexible implementation of digital campaigns and similar field engagement. A "Data Island" architecture was developed to work within the framework of the SharePoint web part ("portlet") model, along with a retrieval API to load content and share relevant joining keys across elements of the page and site. These concepts were further developed by the product group into what are now known as managed metadata services and external lists in Microsoft 365 SharePoint.
SAP on Azure
First ever client deployment of SAP on Azure
SAP and Microsoft's strategic partnership at the time did not include support for deploying SAP in an Azure data center. This project involved working with strategic alliance partners to resolve the technical details, as well as advocating on behalf of the client, who insisted on running their new SAP ERP environment in the Azure data center. Today, SAP and Microsoft work together to continuously improve the solution that grew out of this project.
TFS on Azure
First client Team Foundation Server on Azure leads the charge for Azure DevOps.
Prior to the release of Azure DevOps, the ability to provide globally accessible, multi-party source control, deployment operations, and work item tracking was very much in its infancy. I deployed such an environment using TFS, solving the needed credential mappings with federation and early implementations of OAuth. Working as a member of the Visual Studio Rangers team, I provided knowledge transfer to the product group, which went on to vastly improve the solution with what is today Azure DevOps and GitHub integration.

Virtual Machine Fast Update 2 (VMFU2)
Customer-First problem solving with the Azure product group.
A client, at the time the largest IoT user of Azure services, had many hundreds of thousands of IoT devices deployed in autos worldwide, each reporting status every 15 minutes. The client noticed a bottleneck in its Stream Analytics deployment and was very concerned about the environment's ability to process data sets from the client devices. I worked with the client to deeply understand the problem and organized facilitation with multiple Azure teams to further diagnose and solve it. Working together, the Microsoft team deployed VMFU2 to help IoT devices maintain connections through regular VM patch updates to the platform.
As part of my role, I went on to facilitate regular reporting to Azure product leadership on the client's satisfaction with Microsoft products, which was especially important as this company's success was deeply tied to the early work of Satya Nadella in his role within Azure. I am proud to have worked under the leadership of Jason Zander in this client success program, which blazed the trail to the more customer-focused field engagement strategy the entire company has in place today.