Angular Array Filter Framework

We are currently working on an Angular application that has many different types of domain-specific objects, all of which share a common set of base attributes, “isDeleted” and “title” to name a few.

[code language="javascript"]
export interface EntityDomainModelBase {
    code: string;
    title?: string;
    isDeleted?: boolean;
}
[/code]
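
As an illustration, a domain-specific model simply extends this base. The SomeDomainModel sketched here is hypothetical; its persons collection is the one referenced by the domain-specific filter further down:

[code language="javascript"]
export interface Person {
    name: string;
}

// hypothetical domain model used by the filtering examples below
export interface SomeDomainModel extends EntityDomainModelBase {
    persons: Person[];
}
[/code]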

A few areas within the application need to filter arrays of these domain-specific objects by both domain-specific attributes and the base attributes.  A base Angular “pipe” class was created to encapsulate the filter plumbing and the filter logic for the common base attributes, along with the delegation to the domain-specific subclass.

[code language="javascript"]
import { PipeTransform } from '@angular/core';

import { EntityDomainModelBase } from './api/DomainModelBase';

export class DomainModelArrayFilterPipeContext {
    viewDeleted: boolean;
    searchTerm: string;
}

export class DomainModelArrayFilterPipe<typeOfEntity extends EntityDomainModelBase>
    implements PipeTransform {

    transform(entities: typeOfEntity[], ctx: DomainModelArrayFilterPipeContext) {

        if (!ctx)
            return entities;
        if (!entities || !entities.length)
            return entities;

        return entities.filter((item: typeOfEntity) => {
            return (ctx.viewDeleted == undefined || item.isDeleted == ctx.viewDeleted)
                && (ctx.searchTerm == undefined || ctx.searchTerm == ''
                    || (item.title || '').toLowerCase().indexOf(ctx.searchTerm.toLowerCase()) >= 0
                    || this.checkTerm(item, ctx.searchTerm)
                );
        });
    }

    /*
    implement in the domain specific filter class to filter on domain specific attributes
    */
    checkTerm(item: typeOfEntity, searchTerm: string): boolean { return true; }
}
[/code]

A domain-specific subclass was then created and used like any other “@Pipe”-decorated class. This domain-specific pipe implementation also filters an array of persons for a specific name.

[code language="javascript"]
import { Pipe } from '@angular/core';

@Pipe({ name: 'domainSpecificObjectFilter' })
export class DomainSpecificFilterPipe extends DomainModelArrayFilterPipe<SomeDomainModel> {
    checkTerm(item: SomeDomainModel, searchTerm: string): boolean {
        return item.persons.filter(p => p.name.toLowerCase().indexOf(searchTerm.toLowerCase()) >= 0).length > 0;
    }
}
[/code]

The filter classes above are then used to filter a list of domain-specific objects and render the appropriate HTML.

[code language="html"]
<ng-container *ngFor="let event of (events | domainSpecificObjectFilter:filterContext)">
[/code]
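
For completeness, here is a minimal sketch of how the filterContext bound in that template might be set up in the hosting component. The component name, file names, and import paths are illustrative, not from the original code:

[code language="javascript"]
import { Component } from '@angular/core';

// illustrative import paths
import { DomainModelArrayFilterPipeContext } from './domainModelArrayFilter.pipe';
import { SomeDomainModel } from './api/SomeDomainModel';

@Component({
    selector: 'event-list',                  // hypothetical component
    templateUrl: 'event-list.component.html'
})
export class EventListComponent {
    events: SomeDomainModel[] = [];          // loaded elsewhere (service call, resolver, etc.)

    // the context consumed by the pipe; bind viewDeleted and searchTerm to UI controls
    filterContext: DomainModelArrayFilterPipeContext = {
        viewDeleted: false,
        searchTerm: ''
    };
}
[/code]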

With this design, any number of domain-specific filtering scenarios can be addressed without duplicating the common argument checking and array filtering logic.

Angular Dirty Tracking of Object Graphs

I am currently working on a CRUD app in Angular and needed to implement dirty tracking of an object graph.  All the examples I have found work well for the “input” properties of a component, or for simple objects using the “KeyValueDiffer”.

I started implementing the logic based upon this post: Angular Change Detection. But as soon as I tried to edit a collection or a property of the root object, I needed to traverse the nested objects and arrays.

I then decided to keep it simple and just compare JSON strings of the original object and any changes that occur to it during the edit cycle.  This approach would allow the user to change a property back to its initial state using normal editing procedures.  Editing an “input” and then just changing the text back would reset the dirty flag, adding a collection item and then removing it would reset the dirty flag, etc.

[code language="javascript"]
updateDirty(): boolean {

    var currentJson: string = JSON.stringify(this.entity);
    this.isDirty = currentJson != this.originalJson;

    return this.isDirty;
}

editStart(entity: typeOfEntity) {
    this.entity = entity;
    this.isDirty = false;

    this.originalJson = JSON.stringify(entity);
}
[/code]
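
A rough sketch of how these two methods might be called from a component's edit flow; the save service and the method names here are illustrative placeholders, not from the original code:

[code language="javascript"]
// illustrative usage inside the same editing component/base class
onEditClick(entity: typeOfEntity) {
    this.editStart(entity);          // snapshot the original JSON for later comparison
}

onSaveClick() {
    if (!this.updateDirty())         // nothing changed since editStart; skip the save
        return;

    // hypothetical save call; re-baseline the snapshot once the save succeeds
    this.entityService.save(this.entity).subscribe(() => this.editStart(this.entity));
}
[/code]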

Angular Constants Injected Server Side from ASP.NET CSHTML

Most answers on the internet work well for JIT compilation; refer to the Stack Overflow question here: Stack Overflow Question

We are using JIT compilation for development and AOT for deployment. I couldn’t find a good way to make the standard approach work with AOT. For reference, I followed the AOT cookbook here: Angular AOT cookbook.

The approach below will not work for server side rendering but should suffice for both JIT and AOT client side rendering.

1) In the Razor view, _layout.cshtml for example, just put a script block and set a JSON object on the window object. I placed this block within the “head” tag, but the location doesn’t really matter. The values for the JSON keys can be produced by any Razor syntax.

[code language="javascript"]
window.appContext = {
    userName: '@("test".ToUpper())',
    isAdmin: @(1 == 1 ? "true" : "false")
};
[/code]

2) Create an app context service, wire it up in the appropriate module, and inject it into components for usage; if you need help with this, just comment and I’ll supply more info.

[code language="javascript"]
import { Injectable } from '@angular/core';

/*
must match definition in SCRIPT tag that is seeded from razor
*/
interface IAppContext {
    userName: string;

    isAdmin: boolean;
}

/*
reads the values seeded on the window object by the razor view and exposes them for injection
*/
@Injectable()
export class AppContextService implements IAppContext {

    userName: string;

    isAdmin: boolean;

    constructor() {
        var appContextBootstrap: IAppContext = (<IAppContext>(<any>window).appContext);

        this.userName = appContextBootstrap.userName;
        this.isAdmin = appContextBootstrap.isAdmin;
    }

}
[/code]
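
For the “wire it up in the appropriate module” part of step 2, the registration is an ordinary provider entry. A minimal sketch, assuming a root AppModule and these file names (both are illustrative):

[code language="javascript"]
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { AppContextService } from './appContext.service';

@NgModule({
    imports: [BrowserModule],
    declarations: [AppComponent],
    providers: [AppContextService],   // makes the seeded context injectable application-wide
    bootstrap: [AppComponent]
})
export class AppModule { }
[/code]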

3) Reference and use the app context service throughout the application.

[code language="javascript"]
import { Component } from '@angular/core';

import { AppContextService } from './appContext.service';

@Component({
    moduleId: module.id,
    selector: 'story-app',
    templateUrl: 'app.component.html'
})
export class AppComponent {
    AppConfig: any;
    constructor(private appContext: AppContextService) { }

    ngOnInit() {

        alert('hi - ' + this.appContext.userName);
    }
}
[/code]

SQL Server Partitioning using Multiple Varying Date Ranges

Overview

There are numerous articles about best practice partitioning approaches for SQL Server, and they essentially deal with a similar scenario: continuous population of historical data where a date column is coupled to the data’s relevant lifespan.  The partitioning scheme uses this date as the partitioning column, and the oldest partition is archived out of the primary table at regular time intervals (usually via the ALTER TABLE … SWITCH statement into a staging table).

But what approach is best when the historical data is a mixed bag – where some data needs to persist indefinitely (regardless of its origin date) and the rest fits into the more traditional approach described above?  This article describes a solution for this business problem.

Scenario

A car service has an audit database that stores years of the before and after values of a subset of its OLTP tables.  Not surprisingly, the large data volumes in the audit database’s tables require partitioning to allow for easier management of data files and backups.

Included in the OLTP subset are tables containing timeless business data like customers (name, address, contact info, etc.) and drivers (name, availability, etc.), and tables containing expirable data related to a specific trip, including pickup location, duration, and flight arrival or departure information.

Each change in the OLTP database becomes an audit header row identifying the table and change date, plus child detail rows with the before and after values of the columns updated in the transaction.

A user of the archive’s reporting system would need to see the history of customer or driver changes across all time, but only needs to see trip data for trips that occurred within the last six months.

Approach

A “standard” partition function and scheme on the change date works for the “expiring” data but not for the timeless data.  What is needed is an abstraction layer to calculate a partition from the source table and change date.

Database Objects

These are the abbreviated scripts and sample data of the primary database objects which implement this solution.  Detailed descriptions follow.

[code language="sql"]
CREATE TABLE [Partition](
    [PartitionNumber] [smallint] NOT NULL,
    [IsArchived] [bit] NOT NULL DEFAULT ((0)),
    [IsReadOnly] [bit] NOT NULL DEFAULT ((0)),
    [FinalBackupTaken] [bit] NOT NULL DEFAULT ((0))
)
[/code]

[code language="sql"]
CREATE TABLE [PartitionMap](
    [SourceTableId] [smallint] NOT NULL,
    [StartDate] [date] NOT NULL,
    [EndDateExclusive] [date] NOT NULL,
    [PartitionNumber] [smallint] NOT NULL
)
[/code]

[code language="sql"]
CREATE PARTITION FUNCTION [pfAuditHistory](smallint) AS RANGE RIGHT FOR VALUES (1, 2, 3, 4, 5, …)

CREATE PARTITION SCHEME [psAuditHistory] AS PARTITION [pfAuditHistory] TO ([fg00000], [fg00001], [fg00002], [fg00003], [fg00004], [fg00005], …)

CREATE TABLE [HistChangeHeader](
    [SourceTableId] [smallint] NOT NULL,
    [AuditDateTime] [datetime] NOT NULL,
    [PartitionNumber] [smallint] NOT NULL
) ON [psAuditHistory]([PartitionNumber])

CREATE TABLE [HistChangeDetail](
    [HeaderId] [bigint] NOT NULL,
    [PropertyNameId] [smallint] NOT NULL,
    [BeforeValue] [varchar](max) NULL,
    [AfterValue] [varchar](max) NULL,
    [PartitionNumber] [smallint] NOT NULL
) ON [psAuditHistory]([PartitionNumber])
[/code]

The Partition table persists the state of a given partition.  Partition 1, which is our timeless partition, is never archived or read-only, and its “final backup” will never be taken since it is constantly being added to by the timeless audit data.  All other partitions are bound to month values (how this is done will be explained shortly).  For example, partitions 2-5 represent data for months in the year 2009, and partitions 100-101 represent 3/2017 and 4/2017 respectively.  

Stored procedures drive all the partition operations through this table so that the physical state of the partition (and corresponding filegroup) is always in sync with the Partition table.  

  • Manage ReadOnly – look for any rows in Partition where IsReadOnly = 0 whose date range (the EndDateExclusive in PartitionMap) has passed, alter the partition’s filegroup to read-only, and set the IsReadOnly bit in the Partition table to 1.
  • Manage Backup – manages the backup of each partition (via BACKUP DATABASE … FILEGROUP=xxx) and sets the FinalBackupTaken bit to 1 where IsReadOnly = 1 and FinalBackupTaken = 0.  This means that the final backup for this partition is complete and no additional backups need to be taken.
  • Manage Archive – once FinalBackupTaken = 1 and the data is no longer needed for reporting (for example, data older than 5 years), the partition can be migrated (via ALTER TABLE … SWITCH) to a staging table and then bcp’ed out of the database.  The export can be moved to cheap storage in case it ever needs to be imported back into the database, which is assumed to be a rare (if ever) occurrence.  The underlying T-SQL for these operations is sketched after this list.
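
A rough sketch of the filegroup-level commands behind these three procedures. The database name, filegroup, partition ordinal, and staging table names are illustrative; the real procedures are driven by the state rows in the Partition table:

[code language="sql"]
-- Manage ReadOnly: freeze the filegroup of a partition whose EndDateExclusive has passed
ALTER DATABASE [AuditDb] MODIFY FILEGROUP [fg00005] READ_ONLY;

-- Manage Backup: take the one final backup of the now read-only filegroup
BACKUP DATABASE [AuditDb]
    FILEGROUP = 'fg00005'
    TO DISK = 'X:\Backups\AuditDb_fg00005_final.bak';

-- Manage Archive: switch the partition's rows into staging tables (each staging table
-- must match the source schema and live on that partition's filegroup), then bcp out
ALTER TABLE [HistChangeDetail] SWITCH PARTITION 5 TO [HistChangeDetail_Staging];
ALTER TABLE [HistChangeHeader] SWITCH PARTITION 5 TO [HistChangeHeader_Staging];
[/code]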

The PartitionMap table defines the mapping from table and date to its partition.  When changes are sent to the audit database, a stored procedure is invoked with the source table and change date.  This procedure looks for a matching entry in the PartitionMap table, obtains the PartitionNumber, and uses it for the insert of the header and detail rows.  In the sample PartitionMap data, tables 2 & 3 are timeless and use a distant end date so that for the foreseeable future all data for these tables will be assigned to partition 1.  Tables 5 & 6 are partitioned by month, which allows older data for these tables to be marked read-only and eventually archived.
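
As a sketch, that lookup might look roughly like the following; the procedure and parameter names are illustrative, and the detail-row insert is abbreviated:

[code language="sql"]
CREATE PROCEDURE [InsertChangeHeader]
    @SourceTableId smallint,
    @AuditDateTime datetime
AS
BEGIN
    DECLARE @PartitionNumber smallint;

    -- resolve the partition from the source table and the change date
    SELECT @PartitionNumber = pm.PartitionNumber
    FROM [PartitionMap] pm
    WHERE pm.SourceTableId = @SourceTableId
      AND @AuditDateTime >= pm.StartDate
      AND @AuditDateTime < pm.EndDateExclusive;

    -- the resolved number drives row placement on the partition scheme
    INSERT INTO [HistChangeHeader] (SourceTableId, AuditDateTime, PartitionNumber)
    VALUES (@SourceTableId, @AuditDateTime, @PartitionNumber);

    -- detail rows (abbreviated) are inserted with the same @PartitionNumber
END
[/code]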

Each partition is assigned to a filegroup, which in this scenario contains a single file of 10GB.  This was calculated via analysis of the already present historical data.  There were a few outlier historical months which needed a slightly larger file size; this will obviously be a function of your unique data requirements.

The header and detail tables use the PartitionNumber column as their partitioning column, and all indexes created for these tables are aligned with the same partitioning scheme.  This was primarily done to allow for the use of ALTER TABLE … SWITCH for easy archiving of a partition’s data, but it also allows for partition elimination in queries against this data.  For example, once the header rows are found via their business key (e.g. trip number), we can use partition elimination for the join to the detail table, which is several times larger than the header table.  Having to look across the header’s partitions has not negatively impacted performance, and the scheduled archiving of older partitions can mitigate potential issues.

Summary

SQL Server partitioning is a simple but extremely powerful feature for managing your physical data.  But like most implementations in the real world, the “standard” scenarios sometimes fall short of your goals.

Hopefully, this helps with your own creative approaches to extending partitioning to fit your business needs.

Linq to SQL DBML TFS Merge Strategies

I know LINQ to SQL is a somewhat outdated framework, but it’s super simple to use and still gets the job done for us. It’s been used for several years on our current project, and we have always run into some issues when merging between two branches. The DBML and designer files would get all out of whack during the merge process and would require a lot of tweaking to make correct.

We created a project called “DBML Organizer” that we would run on the source .DBML and .layout files to sort the XML alphabetically. From there, the TFS merge logic would handle pretty much all cases.

The following code can be copied and pasted and used as is.

[code language="csharp"]
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Linq;

namespace DbmlOrganizer
{
    class Program
    {
        static void Main(string[] args)
        {
            var path = args.Length > 0 ? args[0] : @"path to some folder with the .dbml files";

            Directory.GetFiles(path, "*.dbml", SearchOption.TopDirectoryOnly).ToList().ForEach(f => FormatDbml(f));
            Directory.GetFiles(path, "*.dbml.layout", SearchOption.TopDirectoryOnly).ToList().ForEach(f => FormatDbmlLayout(f));

            Console.WriteLine("any key to continue");
            Console.ReadKey();
        }

        private static void FormatDbmlLayout(string layoutFile)
        {
            var xml = XElement.Load(layoutFile);

            // detach the classShape elements in alphabetical order of their DataClassMoniker name
            var classShapes = (from cs in xml.Descendants("classShape")
                               orderby cs.Descendants("DataClassMoniker").First().Attribute("Name").Value
                               select cs).ToList();
            classShapes.ForEach(cs => cs.Remove());

            var nestedChildShapes = xml.Descendants("nestedChildShapes").FirstOrDefault();
            if (nestedChildShapes == null)
                return;

            // re-append the shapes in sorted order
            nestedChildShapes.Add(classShapes);

            if (!File.GetAttributes(layoutFile).HasFlag(FileAttributes.ReadOnly))
            {
                xml.Save(layoutFile, SaveOptions.None);
                Console.WriteLine("Processed Layout- " + layoutFile);
            }
            else
            {
                Console.WriteLine("Layout ignored because of read-only- " + layoutFile);
            }
        }

        private static void FormatDbml(string dbmlFile)
        {
            // sorts a table's Column elements alphabetically within its Type element
            Action<XNamespace, XElement> tableFormatter = (ns2, tableXml) =>
            {
                var type = tableXml.Descendants(ns2 + "Type").FirstOrDefault();

                var columns = SortXml(ns2, tableXml, "Column", "Name").ToList();
                type.Add(columns);
            };

            XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";

            var xml = XElement.Load(dbmlFile);

            var connection = xml.Descendants(ns + "Connection").FirstOrDefault();

            var tables = SortXml(ns, xml, "Table", "Name").ToList();
            tables.ForEach(t => tableFormatter(ns, t));

            var functions = SortXml(ns, xml, "Function", "Name").ToList();

            // don't change the order of the function params, etc.
            //functions.ForEach(f => FormatDbmlFunction(ns, f));

            // re-insert the sorted elements after the Connection element (functions first so tables end up before them)
            connection.AddAfterSelf(functions);
            connection.AddAfterSelf(tables);

            if (!File.GetAttributes(dbmlFile).HasFlag(FileAttributes.ReadOnly))
            {
                xml.Save(dbmlFile, SaveOptions.None);
                Console.WriteLine("Processed DBML - " + dbmlFile);
            }
            else
            {
                Console.WriteLine("DBML ignored because of read-only- " + dbmlFile);
            }
        }

        private static IEnumerable<XElement> SortXml(XNamespace ns, XElement parent, string elementName, string attributeName)
        {
            // detach the matching elements in alphabetical order of the given attribute; the caller re-attaches them
            var elements = (from c in parent.Descendants(ns + elementName) orderby c.Attribute(attributeName).Value ascending select c).ToList();
            elements.ForEach(e => e.Remove());
            return elements;
        }

    }
}
[/code]