Friday, December 19, 2008

Select random songs for MP3 player

I recently upgraded the firmware on my old Nano Plus MP3 player. As a result, Windows Media Player, running under Server 2008, won't detect the device any more. Since I periodically toss around 50 random songs onto my player for running purposes, and since I'm too cheap to buy another compatible player, I wrote a quick-n-dirty lil' C# Windows Forms application to do this.

The application will simply select n random songs given a bunch of different input and output parameters. The UI looks like:



You have a few nice options like providing regular expression filters for including or excluding certain files. Whatever settings you use will be saved in an XML file in your application data folder.
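Under the hood, the selection is simple. Here's a minimal sketch of the core logic (the class name, method shape, and exact filtering behavior are my assumptions for illustration, not the app's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

public static class SongPicker
{
    // Pick up to 'count' random files under 'root', honoring optional
    // include/exclude regular expression filters (null or empty = no filter).
    public static List<string> Pick(string root, int count,
        string includePattern, string excludePattern)
    {
        Random rand = new Random();

        IEnumerable<string> files = Directory
            .GetFiles(root, "*.mp3", SearchOption.AllDirectories);

        if (!string.IsNullOrEmpty(includePattern))
            files = files.Where(f => Regex.IsMatch(f, includePattern));
        if (!string.IsNullOrEmpty(excludePattern))
            files = files.Where(f => !Regex.IsMatch(f, excludePattern));

        // Shuffle by ordering on a random key, then take the first 'count'.
        return files.OrderBy(f => rand.Next()).Take(count).ToList();
    }
}
```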

You can download the C# solution by clicking here. It's not the best code, but hey, I needed it before a race tomorrow.

Tuesday, December 2, 2008

Avoiding Embedded T-SQL in Query String

I've seen folks trying to hack a web site by passing large amounts of hex-encoded T-SQL commands embedded within a query string...and it's a bit disturbing. I like to be a bit proactive when it comes to this type of thing, so I tend to use a custom HTTP module that I register in my web.config. I write a class and place the file in my App_Code folder, then register it in my web.config like:

<httpModules>
<add name="RestrictionHttpModule" type="HttpModule.RestrictionHttpModule"/>
</httpModules>

The C# class looks something like this:

using System;
using System.Web;

namespace HttpModule
{
public class RestrictionHttpModule : IHttpModule
{
public RestrictionHttpModule()
{
}

#region IHttpModule Members

public void Dispose()
{
}

public void Init(HttpApplication context)
{
context.BeginRequest += new EventHandler(Application_BeginRequest);
}

private void Application_BeginRequest(object source, EventArgs e)
{
HttpContext context = ((HttpApplication)source).Context;

// watch out for t-sql commands that may be embedded
string query = context.Request.Url.Query.ToLower();
if (!string.IsNullOrEmpty(query))
{
if (query.Contains(";declare") ||
query.Contains("exec(") ||
query.Contains("cast(") ||
query.Contains("convert("))
{
context.Response.StatusCode = 403; // forbidden
}
}
}

#endregion
}
}


This is just an example. My actual handler is a bit more robust. I include code that also allows me to reject remote host IPs, remote host names, and referrer names. Since my global exception handler email includes the full URL for any exceptions, it's easy for me to see which types of remote sites are attempting to hack, log in without credentials, etc.

Tuesday, November 25, 2008

Making custom changes in LINQ-to-SQL designer

This post describes a scenario I encountered recently that would require making custom changes in the LINQ-to-SQL designer.

Even given that the Microsoft rumor-mill has tagged LINQ-to-SQL as terminal given the rise of LINQ-to-Entities, I still find LINQ-to-SQL useful for those non-enterprise solutions. As many developers know, the LINQ-to-SQL class designer doesn’t provide much automated support for keeping your database schema up-to-date with the class designer objects (tables, relationships, etc.).

I haven’t found this to be a huge issue since I work around this limitation by deleting and re-adding tables (which automatically updates the dbml code). I’ve steered clear of making any custom changes through the designer so I can perform this simple workaround.

Recently I ran into an issue that makes me want to make manual changes in the designer. It has to do with the way the relationship property names are generated. Let’s look at an example.

Consider a table named Grades that represents school grades like 1, 2, 3, and so forth. Now assume another table named Worksheets that represents individual worksheets for each school grade. This is a typical one-to-many relationship. Two of the Worksheets columns are StoreGrade (where a PDF worksheet is stored on disk) and DisplayGrade (which grade to display this worksheet for). This allows a worksheet to be reused for 4th grade even though it is stored in the 3rd grade folder. StoreGrade and DisplayGrade are both foreign keys referencing the Grades table.

When using the LINQ-to-SQL designer, the above scenario will create a Grade object with two properties called Worksheets and Worksheets1. When using IntelliSense, it really isn’t possible to know which of these properties is for StoreGrade and which is for DisplayGrade. It is possible to change the property names using the designer properties for each relationship, but that would mean I lose the ability to delete/re-add a table with ease.
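To make the ambiguity concrete, here is a rough sketch of the shape of the generated classes (names simplified; the real designer output uses EntitySet<T> associations and attribute mappings):

```csharp
using System.Collections.Generic;

// Rough shape of the designer output for the scenario above.
public class Worksheet
{
    public int StoreGrade;   // FK to Grades: folder where the PDF lives
    public int DisplayGrade; // FK to Grades: grade the worksheet displays for
}

public class Grade
{
    public int GradeNumber;

    // One collection per relationship; the designer just appends a digit,
    // so nothing in IntelliSense ties either name back to StoreGrade or
    // DisplayGrade.
    public List<Worksheet> Worksheets = new List<Worksheet>();
    public List<Worksheet> Worksheets1 = new List<Worksheet>();
}
```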

I’m continuing to hold off making custom designer changes, but this is one scenario I’ve run across that gives a reason to make custom changes via the LINQ-to-SQL designer.

Saturday, November 8, 2008

Measuring Stability and Abstractness

I've always felt that a proper Object-Oriented design must be focused on package and class relationships, and that those relationships should be loosely coupled.

Bob Martin wrote a book back in 1995 called Designing Object-Oriented C++ Applications Using the Booch Method. Chapter 3 of this book explained how to design a loosely coupled architecture based on cohesion and closure. Then in 2006 Bob and his son wrote Agile Principles, Patterns, and Practices in C#. Chapter 28 of this book revisited the topic of loosely coupled architectures.

I have always felt that this topic is crucial to both architecture and design of Object-Oriented solutions, and thus I often revisit the chapters from these books. I finally decided to encapsulate Bob's content into a single page diagram (taken from the second book).

Click here to get it!

Keep this single-page diagram near and dear to your heart...and read Bob's second book, twice.
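The two package metrics at the heart of that diagram, instability and abstractness, plus the distance from the main sequence, are simple ratios. Here's a minimal sketch of the formulas only; the raw coupling and class counts would come from analyzing your own packages:

```csharp
using System;

// Package metrics from Martin's chapters (inputs are hypothetical counts).
public class PackageMetrics
{
    public int Ca;              // afferent couplings: outside classes that depend on this package
    public int Ce;              // efferent couplings: inside classes that depend on other packages
    public int AbstractClasses; // abstract classes and interfaces in the package
    public int TotalClasses;    // all classes in the package

    // Instability I = Ce / (Ca + Ce): 0 is maximally stable, 1 maximally unstable.
    public double Instability()
    {
        return (double)Ce / (Ca + Ce);
    }

    // Abstractness A = abstract classes / total classes.
    public double Abstractness()
    {
        return (double)AbstractClasses / TotalClasses;
    }

    // Normalized distance from the main sequence D' = |A + I - 1|;
    // packages near 0 balance abstractness against stability.
    public double DistanceFromMainSequence()
    {
        return Math.Abs(Abstractness() + Instability() - 1);
    }
}
```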

Friday, November 7, 2008

Visual Studio macro to insert curly braces

I think the set of keystrokes I use most frequently in development is the one that inserts two curly braces with a blank line between them.

I found the following Visual Studio macro once on the Internet. You can add this to your list of Visual Studio macros, then use the keyboard configuration to assign it to a keystroke. I've assigned the macro so that when I press Shift-Return, it inserts two curly braces, a blank line in between them, then places the cursor indented on the blank line...ready to type in a statement. I use this keystroke constantly in development.

The macro works fine in both Visual Studio 2005 and 2008.

Sub InsertCurlies()
DTE.ActiveDocument.Selection.NewLine()
DTE.ActiveDocument.Selection.Text = "{"
DTE.ActiveDocument.Selection.NewLine()
DTE.ActiveDocument.Selection.Text = "}"
DTE.ActiveDocument.Selection.LineUp()
DTE.ActiveDocument.Selection.NewLine()
End Sub

Using LINQ's ToLookup

Oftentimes you have a set of data and want to process it in groups based on some property of the data. For example, you may want to process all employee records based on what state the employee lives in (presumably for tax purposes).

To organize the employees, we need something like a .NET dictionary class, but one that can store multiple employees per key (i.e. state). A dictionary stores only a single object per key, so we're out of luck using that class.

This is where the LINQ ToLookup and GroupBy operators come to the rescue. ToLookup is a non-deferred operator and GroupBy is a deferred operator. The example below illustrates how ToLookup can be used to process employee objects, together, based on what state the employee lives in. The GroupBy is basically the same thing, but the data is grouped during enumeration because of its deferred nature.

private class Employee
{
public int ID { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
public string StateAbbr { get; set; }
}

static void Main(string[] args)
{
new Program().Run(args);
}

private void Run(string[] args)
{
List<Employee> employees = new List<Employee>
{
new Employee { ID = 1, LastName = "Scott", FirstName = "Michael", StateAbbr = "CO" },
new Employee { ID = 2, LastName = "Smoe", FirstName = "Joe", StateAbbr = "CA" },
new Employee { ID = 3, LastName = "Lampton", FirstName = "Todd", StateAbbr = "CO" },
new Employee { ID = 4, LastName = "Morgan", FirstName = "Jared", StateAbbr = "WA" },
new Employee { ID = 5, LastName = "Smart", FirstName = "Laura", StateAbbr = "ID" },
new Employee { ID = 6, LastName = "Seashell", FirstName = "Sally", StateAbbr = "CO" },
};

ILookup<string, Employee> folksByState = employees
.ToLookup(e => e.StateAbbr);

foreach (IGrouping<string, Employee> stateGroup in folksByState)
{
Console.WriteLine("\nState: {0}", stateGroup.Key);
foreach (Employee e in stateGroup)
{
Console.WriteLine("{0}, {1}, {2}", e.LastName, e.FirstName, e.ID);
}
}
}
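For comparison, here is a self-contained sketch of the deferred GroupBy version (a trimmed-down Employee class stands in for the one above); nothing is grouped until the foreach enumerates the result:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Employee
{
    public string LastName;
    public string StateAbbr;
}

public class GroupByDemo
{
    static void Main()
    {
        List<Employee> employees = new List<Employee>
        {
            new Employee { LastName = "Scott",   StateAbbr = "CO" },
            new Employee { LastName = "Smoe",    StateAbbr = "CA" },
            new Employee { LastName = "Lampton", StateAbbr = "CO" },
        };

        // Nothing is grouped here; GroupBy is deferred.
        IEnumerable<IGrouping<string, Employee>> folksByState =
            employees.GroupBy(e => e.StateAbbr);

        // The grouping actually happens during this enumeration.
        foreach (IGrouping<string, Employee> stateGroup in folksByState)
        {
            Console.WriteLine("State: {0}", stateGroup.Key);
            foreach (Employee e in stateGroup)
            {
                Console.WriteLine("  {0}", e.LastName);
            }
        }
    }
}
```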

Sunday, November 2, 2008

Install Windows Media Player 11 on Windows Server 2003 SP2

I finally found a method that actually works to install Windows Media Player on Windows Server 2003 SP2. I've executed these instructions several times on a VPC, then on my normal server and it was easy. Works great!

1) Download the Windows Media Player 11 setup file and save to disk (e.g. wmp11-windowsxp-x86-enu.exe).

2) Run the setup file.

3) You should see a Validation Checker dialog. Leave this dialog displayed without clicking any of the buttons.

4) Search your hard drive for the file "wmp11.exe". Once found, use Windows Explorer and go to the directory where this file resides.

5) Copy all the files from this directory to another temporary directory, then close the Validation Checker dialog.

6) Use Windows Explorer and go to the temporary directory you created.

7) Right-click and change Properties on the following files; set Run in Compatibility Mode (on the Compatibility Tab) to Windows XP:

umdf.exe
wmfdist11.exe
wmdbexport.exe
wmp11.exe

8) Run wmfdist11.exe and follow the setup instructions.

9) Run umdf.exe and follow the setup instructions.

10) Reboot the computer.

11) Run wmdbexport.exe (it will execute without anything being displayed).

12) Run wmp11.exe and follow the setup instructions.

13) Reboot the computer.

14) Windows Media Player 11 should now be installed and ready to use.

15) Remove your temporary directory when you are sure everything is working.

Wednesday, October 8, 2008

Fast file compare using .NET HashAlgorithm.ComputeHash

I recently had a huge set of files in dozens of directories that I knew were somehow different from a backup. I needed to know which files were different (I just needed the file names). Normally I would use software like Beyond Compare, but this was not an option since the two sets of files were in two different remote locations, without the option of copying 5 GB worth of data from one location to the other.

The solution I came up with, which worked out great, was to modify a set of code from my C# Recipes book that computed a hash of two files and compared the hash values. The new code accepts a file pattern, recursively finds all matching files (from the current directory), then writes out each file's hash value and file name.

I executed the program on both systems, redirecting the output to a text file, then just had to compare the two text files. Note: Be careful that the FileHash program itself and any redirected output files are not included in the file pattern that you are searching on.

Example:

..\FileHash.exe *.* > ..\hashvalues.systemA.txt


using System;
using System.IO;
using System.Security.Cryptography;

namespace FileHash
{
class Program
{
static void Main(string[] args)
{
new Program().Run(args);
}

private void Run(string[] args)
{
// Find all matching files, recursively, starting at the current directory.
string[] files = Directory.GetFiles(@".\", args[0], SearchOption.AllDirectories);

foreach (string file in files)
{
using (HashAlgorithm hashAlg = HashAlgorithm.Create())
{
using (FileStream fs = new FileStream(file, FileMode.Open))
{
// Calculate the hash for the file.
byte[] hashBytes = hashAlg.ComputeHash(fs);

// Write out the hash value and file name.
Console.WriteLine(string.Format(
"{0} {1}",
BitConverter.ToString(hashBytes),
file));
}
}
}
}
}
}

Monday, September 8, 2008

Zip XML in memory for Web Service transport (SharpZipLib)

Here is a full test program that demonstrates how to use SharpZipLib to zip an XElement into a byte array. This allows you to transfer large XML items over web services, and then unzip them on the web service side. I included two methods to unzip, both back to an XElement and to an XML file. IIS 6 does allow compression as well, but the reason I had to have the functionality below was that a PC client application was required to send a host web service a large set of XML (rather than the host sending the client XML).

using System;
using System.IO;
using System.Xml.Linq;
using ICSharpCode.SharpZipLib.Zip;

namespace ConsoleTest
{
class Program
{
static void Main(string[] args)
{
new Program().Run(args);
}

private void Run(string[] args)
{
// create some xml
XElement xml = XElement.Parse("<xml><element>whatever</element></xml>");

// zip xml
string startXml = xml.ToString();
byte[] bytes = ZipContent(xml, "TestXML");

// unzip xml
xml = UnzipContent(bytes);
string endXml = xml.ToString();

// sanity check
System.Diagnostics.Debug.Assert(startXml == endXml);
}

/// <summary>
/// Convert XML to zipped byte array.
/// </summary>
/// <param name="xml">XML to zip.</param>
/// <param name="entryName">The zip entry name.</param>
/// <returns>A byte array that contains the xml zipped.</returns>
private byte[] ZipContent(XElement xml, string entryName)
{
// remove whitespace from xml and convert to byte array
byte[] normalBytes;
using (StringWriter writer = new StringWriter())
{
xml.Save(writer, SaveOptions.DisableFormatting);
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
normalBytes = encoding.GetBytes(writer.ToString());
}

// zip into new, zipped, byte array
using (Stream memOutput = new MemoryStream())
using (ZipOutputStream zipOutput = new ZipOutputStream(memOutput))
{
zipOutput.SetLevel(9);

ZipEntry entry = new ZipEntry(entryName);
entry.DateTime = DateTime.Now;
zipOutput.PutNextEntry(entry);

zipOutput.Write(normalBytes, 0, normalBytes.Length);
zipOutput.Finish();

byte[] newBytes = new byte[memOutput.Length];
memOutput.Seek(0, SeekOrigin.Begin);
memOutput.Read(newBytes, 0, newBytes.Length);

zipOutput.Close();

return newBytes;
}
}

/// <summary>
/// Return zipped bytes as unzipped XML.
/// </summary>
/// <param name="bytes">Zipped content.</param>
/// <returns>Unzipped XML.</returns>
private XElement UnzipContent(byte[] bytes)
{
// unzip bytes into unzipped byte array
using (Stream memInput = new MemoryStream(bytes))
using (ZipInputStream input = new ZipInputStream(memInput))
{
ZipEntry entry = input.GetNextEntry();

byte[] newBytes = new byte[entry.Size];
int count = input.Read(newBytes, 0, newBytes.Length);
if (count != entry.Size)
{
throw new Exception("Invalid read: " + count);
}

// convert bytes to string, then to xml
string xmlString = System.Text.ASCIIEncoding.ASCII.GetString(newBytes);
return XElement.Parse(xmlString);
}
}

/// <summary>
/// Save zipped bytes as unzipped file.
/// </summary>
/// <param name="bytes">Zipped content.</param>
/// <param name="path">File path to save unzipped XML.</param>
private void UnzipContent(byte[] bytes, string path)
{
// unzip bytes into unzipped byte array
using (Stream memInput = new MemoryStream(bytes))
using (ZipInputStream zipInput = new ZipInputStream(memInput))
using (BinaryWriter writer = new BinaryWriter(File.Create(path)))
{
ZipEntry entry = zipInput.GetNextEntry();

int count;
byte[] input = new byte[1024 * 10];
while ((count = zipInput.Read(input, 0, input.Length)) > 0)
{
writer.Write(input, 0, count);
}
}
}
}
}

Wednesday, September 3, 2008

Use PowerShell to capture database schema

Here is a PowerShell script to capture a database schema. Output is written to a per-database directory, with a datetime-stamped file name. Multiple databases/servers can be specified via the XML input file.

To run: ./CaptureSchema.ps1 databases.xml

Here is the PowerShell code:

param ([string]$xmlConfig = $(throw '%argument 1 must be XML configuration file path'))

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("System.Data") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("System.Core") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("System.Linq") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("System.Xml.Linq") | out-null

function ScriptDatabase([string]$serverName, [string]$dbName)
{
$fileName = [String]::Format("{0} {1}.sql", [DateTime]::Now.ToString("yyyyMMdd_HHmmss"), $dbName)
[Console]::Write("Server: $serverName, Database: $dbName, Output: `"$fileName`" . . . ")

$srv = new-object "Microsoft.SqlServer.Management.SMO.Server" $serverName
$db = new-object "Microsoft.SqlServer.Management.SMO.Database"
$scr = New-Object "Microsoft.SqlServer.Management.Smo.Scripter"

$db = $srv.Databases[$dbName]

$scr.Server = $srv
$options = New-Object "Microsoft.SqlServer.Management.SMO.ScriptingOptions"

$options.ClusteredIndexes = $true
$options.Default = $true
$options.DriAll = $true
$options.Indexes = $true
$options.IncludeHeaders = $true
$options.Triggers = $true
$options.AppendToFile = $false
$options.FileName = "$pwd\$fileName"
$options.ToFileOnly = $true

# output all db tables
$scr.Options = $options
$tables = $db.Tables
if ($tables -ne $null)
{
$scr.Script($db.Tables)
}

# output all sprocs
$options.AppendToFile = $true
$sprocs = $db.StoredProcedures | where {$_.IsSystemObject -eq $false}
if ($sprocs -ne $null)
{
$scr.Script($sprocs)
}

# output all db views
$views = $db.Views | where {$_.IsSystemObject -eq $false}
if ($views -ne $null)
{
$scr.Script($views)
}

"done."

}

function SaveSchema($xmlDb)
{
# make folder if not exists yet
$dbName = $xmlDb.Element("Name").Value
$dirName = ".\$dbName"
if ((Test-Path -path $dirName) -eq $False)
{
"Creating directory $dirName..."
ni -type directory $dirName | out-null
}

# save the schema
$serverName = $xmlDb.Element("Server").Value

$prevDir = $pwd
$prevDir
set-location $dirName
ScriptDatabase $serverName $dbName
set-location $prevDir
}

#
# main
#

$xml = [System.Xml.Linq.XElement]::Load((Resolve-Path "$xmlConfig"))

foreach($db in $xml.Elements("Database"))
{
if ($db.Attribute("Enabled").Value -eq $true)
{
SaveSchema $db
}
}

exit

Here is the XML input file:
<Databases>
<Database Enabled="true">
<Name>DatabaseName</Name>
<Server>ServerName</Server>
</Database>
<!-- repeat the Database element if more
than one database schema to capture -->
</Databases>

Monday, August 18, 2008

Validate a file path using C#

I finally found a good algorithm to validate a file path (though I had to fix two bugs in it). I use this static method in Windows Forms when the user is allowed to enter a path. I add a TextChanged event on the textbox and call this method to enable or disable an OK button.

public static bool ValidateFilepath(string path)
{
if (path.Trim() == string.Empty)
{
return false;
}

string pathname;
string filename;
try
{
pathname = Path.GetPathRoot(path);
filename = Path.GetFileName(path);
}
catch (ArgumentException)
{
// GetPathRoot() and GetFileName() above will throw exceptions
// if pathname/filename could not be parsed.

return false;
}

// Make sure the filename part was actually specified
if (filename.Trim() == string.Empty)
{
return false;
}

// Not sure if additional checking below is needed, but no harm done
if (pathname.IndexOfAny(Path.GetInvalidPathChars()) >= 0)
{
return false;
}

if (filename.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
{
return false;
}

return true;
}
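The form wiring is just a couple of lines; a sketch (the control names txtPath and btnOK are hypothetical, and ValidateFilepath is the static method above):

```csharp
// In the form's constructor, after InitializeComponent():
btnOK.Enabled = false;
txtPath.TextChanged += delegate
{
    // Enable OK only while the entered path validates.
    btnOK.Enabled = ValidateFilepath(txtPath.Text);
};
```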

Plug-in architecture (dynamically loading DLLs) using LINQ

For implementing a plug-in architecture using the strategy pattern, this is my preferred way of loading implementations of an interface (from DLLs) at runtime. Make sure you look at the second code example showing how to do the same thing in LINQ.

public List<T> LoadDLL<T>(string path, string pattern)
{
    List<T> plugins = new List<T>();
    foreach (string s in Directory.GetFiles(Path.GetFullPath(path), pattern))
    {
        foreach (Type t in Assembly.LoadFile(s).GetTypes())
        {
            if (!t.IsAbstract && typeof(T).IsAssignableFrom(t))
            {
                plugins.Add((T)Activator.CreateInstance(t));
            }
        }
    }

    return plugins;
}

Now using LINQ...
public List<T> LoadDLL<T>(string path, string pattern)
{
    return Directory.GetFiles(Path.GetFullPath(path), pattern)
        .SelectMany(f => Assembly.LoadFile(f).GetTypes()
            .Where(t => !t.IsAbstract && typeof(T).IsAssignableFrom(t))
            .Select(t => (T)Activator.CreateInstance(t)))
        .ToList();
}

If you want to load an assembly and all the dependent DLLs, you can use the same LINQ query, but use LoadFrom rather than LoadFile.

public List<T> LoadDLL<T>(string path, string pattern)
{
    return Directory.GetFiles(Path.GetFullPath(path), pattern)
        .SelectMany(f => Assembly.LoadFrom(f).GetTypes()
            .Where(t => !t.IsAbstract && typeof(T).IsAssignableFrom(t))
            .Select(t => (T)Activator.CreateInstance(t)))
        .ToList();
}

This can then be called using:
List<Foo> foos = LoadDLL<Foo>(@".\", "*.dll");

Thursday, August 14, 2008

Replacing delegates with lambda expressions

If you understand a lambda expression, you realize that it is just another step in the evolution of delegates. Now that Visual Studio IntelliSense and the .NET compiler can infer from a delegate declaration what parameters a delegate requires, and their types, we no longer have to use the "delegate" keyword or the parameter types...we just need to specify some parameter names.

For example, an older style event delegate would be done like:

x.Completed += delegate(object sender, EventArgs e) { ... };

Now all we need to code is:

x.Completed += (sender, e) => { ... };

Again, the development environment already knows that the Completed event needs two parameters, an object and an EventArgs; there is no need for us to supply those pieces of information.

Saturday, August 9, 2008

PowerShell and CSV files

If you need to process a CSV file, you can use PowerShell's import-csv command. If headers exist in the first line of the CSV, then they will be used as property names on the resulting import-csv output. For example, if your CSV looks like:

Last,First,Middle
Jones,Fred,S
Smith,Sally,M
Johnson,Bob,L

Then you can output the full names like:

import-csv employees.csv |% `
{[string]::format("{0} {1} {2}",$_.first, $_.middle,$_.last)}

Or if you only want last names that start with a "J", you can:

import-csv employees.csv | `
where {$_.last.startswith("J")} |% `
{[string]::format("{0} {1} {2}",$_.first, $_.middle,$_.last)}

Pretty cool, eh?

PowerShell v2 will have the ability to change what the delimiting character is.

Thursday, August 7, 2008

Format C# code for Blogger posts

This is the tool I like for formatting C# source code in my blog.

http://formatmysourcecode.blogspot.com/

Using ListView and LINQ to display multi-level relationships



This is cool! Assume we have a database that has a three-level relationship. We have a Product, Install, and Document table. Products have installations, and installations have related documents for them. This means our Document table has an InstallId and our Install table has a ProductId. This is all standard relationship stuff so hopefully you are following this.

Now assume we want to use LINQ and the ListView web control to display hierarchical data and we want full control over the HTML generated (that's why we use the ListView control). The display will look like:

Product P1
  Installation I1
    Document D1
    Document D2
  Installation I2
    Document D3
Product P2
...

First, use a standard LINQ-to-SQL class in Visual Studio to create your data context object. Next, create a three-level set of ListView objects. Here's the cool part...are you ready for this?

Binding your data: Each ListView needs to be bound to a LINQ IQueryable data source. The outermost ListView, Product, can just be bound to Products (remember the generated data context pluralizes the table name) like:

MultiLevelDataContext db = new MultiLevelDataContext();
IEnumerable<Product> products = db.Products;
lvProduct.DataSource = products;
lvProduct.DataBind();



If you do this in code-behind, that's all you are going to do there. The other two bindings are done in the ListView declaration itself.

The Install and Document nested ListView components just need to have their DataSource property set to the property of its outer ListView object. Remember that when the LINQ-to-SQL code was generated, it automatically added properties for relationship data. For example, the Product class has a property called Installs. The Install class has a property called Documents. These properties basically end up being IQueryable data sources. We simply use a standard Eval() binding statement in the DataSource to connect things up. This makes it incredibly easy to bind the related data into a web control like a three-level ListView structure.

Below is what it all ends up looking like. Two things to mention: first, most of the aspx is table formatting; second, notice the DataSource statements in the two nested ListView controls.

Code Behind (cs)

protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
MultiLevelDataContext db = new MultiLevelDataContext();
IEnumerable<Product> products = db.Products;
lvProduct.DataSource = products;
lvProduct.DataBind();
}
}



Web Page (aspx)

<asp:ListView ID="lvProduct" runat="server">
<LayoutTemplate>
<table cellpadding="3" cellspacing="0" border="1" style="width: 100%; background-color: Silver;">
<tr runat="server" id="itemPlaceholder" />
</table>
</LayoutTemplate>
<ItemTemplate>
<tr>
<td>Product:
<%# Eval("Name") %>
</td>
</tr>
<asp:ListView ID="lvInstall" runat="server" DataSource='<%# Eval("Installs") %>'>
<LayoutTemplate>
<tr>
<td>
<table cellpadding="3" cellspacing="0" border="1" style="width: 100%; background-color: Aqua;">
<tr runat="server" id="itemPlaceholder" />
</table>
</td>
</tr>
</LayoutTemplate>
<ItemTemplate>
<tr>
<td>Version:
<%# Eval("Version") %>
</td>
<td>Release Date:
<%# Eval("ReleaseDate") %>
</td>
</tr>
<asp:ListView ID="lvDocuments" runat="server" DataSource='<%# Eval("Documents") %>'>
<LayoutTemplate>
<tr>
<td colspan="2">
<table cellpadding="3" cellspacing="0" border="1" style="width: 100%; background-color: Lime;">
<tr runat="server" id="itemPlaceholder" />
</table>
</td>
</tr>
</LayoutTemplate>
<ItemTemplate>
<tr valign="top">
<td>
<%# Eval("Name") %>
</td>
<td>
<%# Eval("Description") %>
</td>
</tr>
</ItemTemplate>
</asp:ListView>
</ItemTemplate>
</asp:ListView>
</ItemTemplate>
</asp:ListView>

Monday, August 4, 2008

Synchronizing LINQ-to-SQL with database schema

The initial release of the LINQ-to-SQL designer support in VS 2008 doesn't have a good way of keeping changes in the database schema synchronized with the dbml (designer) data. I've really only found two products so far that claim to do this for you.

Huagati DBML Tools
http://www.huagati.com/dbmltools/

Database Restyle by Perpetuum Software
http://www.perpetuumsoft.com/Product.aspx?lang=en&pid=55

I have not personally tried either of these yet.

Wednesday, July 30, 2008

Capturing and viewing WCF SOAP messages

When using WCF, it is sometimes necessary to examine the incoming and outgoing SOAP messages. You can do this by modifying your WCF configuration file, then viewing the messages using a utility installed alongside Visual Studio.

The easiest way to configure your configuration file is to use the Visual Studio menu, Tools | WCF Service Configuration Editor. Use the utility to open your WCF configuration file (the place where all your WCF endpoints are defined). Next, access the Diagnostics section to enable logging for incoming and outgoing SOAP messages (Message Logging in the tree view). There are various options in the right pane of the display. If you want to see the entire SOAP messages, be sure to enable the LogEntireMessage option in the Message Logging section (it took me a while to figure out this one).

Perform a File | Save and now your WCF configuration should be ready to go. When you run your WCF application, a new xxxx.svclog file will be created in the root directory of your Visual Studio project (you can change this via the WCF configuration file).
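For reference, the pieces the editor writes into the config file look roughly like this (the listener name and log file path are whatever you choose):

```xml
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="false"
                    maxMessagesToLog="3000" />
  </diagnostics>
</system.serviceModel>

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="web_messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```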

To view the message log file, use the SvcTraceViewer.exe program from Microsoft. It’s normally located in \Program Files\Microsoft SDKs\Windows\v6.0A\Bin. Use the File | Open to find the xxxx.svclog file. There are several different ways to view your data. I like the "Messages" view the best.

When you are done with your diagnostics, you can use the WCF Service Configuration Editor to disable logging.

Monday, July 28, 2008

PowerShell to encrypt / decrypt app.config sections

Here is a PS script called AppConfigCrypto.ps1 that allows you to encrypt and decrypt sections of an app.config. Be aware that once a config is encrypted, you can't just copy it from machine to machine since the encryption is done via the default machine key. You should be able to get around this by importing your own keys and modifying the script below. If you don't import a user-specified key, then you will have to encrypt on the machine where the application will execute.

Here's the PS script:

param(
[string]$sectionName,
[string]$exePath="app.config",
[switch]$encrypt,
[switch]$decrypt)

function CallExit($msg)
{
$msg
Usage
exit
}

function OKExit($msg)
{
$msg
exit
}

function Usage
{
"Usage: ./AppConfigCrypto.ps1 sectionName exePath [-encrypt | -decrypt]"
}

# check params
if ($sectionName.Trim().Length -eq 0) { CallExit("%You must pass a section name (e.g. appSettings, ConnectionStrings)") }
if ($encrypt -eq $false -and $decrypt -eq $false) { CallExit("%Must specify -encrypt or -decrypt") }
if ($encrypt -ne $false -and $decrypt -ne $false) { CallExit("%Specify either -encrypt or -decrypt, not both") }

# load the config
$config = [System.Configuration.ConfigurationManager]::OpenExeConfiguration((Resolve-Path $exePath))

# make sure section exists and is readable
$section = $config.GetSection($sectionName)
if ($null -eq $section) { CallExit("%$sectionName section not found") }
if ($section.IsReadOnly()) { CallExit("%$sectionName is read-only") }

if ($encrypt)
{
if ($section.SectionInformation.IsProtected -eq $true) { OKExit("%Section already encrypted") }
"Encrypting $sectionName . . ."
$section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider")
}
elseif ($decrypt)
{
if ($section.SectionInformation.IsProtected -eq $false) { OKExit("%Section already decrypted") }
"Decrypting $sectionName . . ."
$section.SectionInformation.UnprotectSection()
}

# save section
$section.SectionInformation.ForceSave = $true
$config.Save()

Tuesday, May 20, 2008

Update: Clear Visual Studio "Recent Project" entries

I had posted last October how to clear the recent project/file list in Visual Studio via a register cleanup. Recently I found a really nice VS add-in to do this. Very nice for VS 2008.

http://www.csharper.net/blog/visual_studio_2008_add_in_compatibility.aspx

Wednesday, May 14, 2008

LINQ cheat sheets for deferred and non-deferred

Here are cheat sheets for the LINQ deferred and non-deferred extension methods. Included are the page number references of where these are found in Joseph Rattz's book, Pro LINQ by Apress.

Format is legal, 14" x 8.5"

Click here

Aggregation: LINQ and SQL XML fields

I'm thinking ahead about the possibility of having to store some XML snippets in a database XML field. How can we store XML data in the database and use LINQ to easily retrieve and report on it? There are lots of examples out there, but I want to look at a more difficult scenario, one involving variable XML data and aggregating that data from multiple records.

I've dreamed up a scenario based on a survey to collect the opinions of partners. When a partner takes the survey, there could be 1 to n questions on various opinions. I say n since the number of questions could change over time (plus I want to model variable XML data for this example).

For each partner, we decide to store the opinions in a single database XML field. For this example, I use the values "Choice X", but in reality it might be something like "Favor debit cards" or "Want e-mail rewards". Here is an example of what might be stored in the XML field for a single partner record, for a single survey:

<Opinions>
  <Opinion>Choice 1</Opinion>
  <Opinion>Choice 2</Opinion>
  <Opinion>Choice 5</Opinion>
</Opinions>

Don't get hung up on whether this is a correct way to store survey results...that's not the purpose here.

Notice that since certain opinions were not selected, they were not included in the XML (e.g. Choice 3-4 are missing). Assuming that the opinions are a simple set of checkboxes on a survey page, the database XML column value should be created like:

XElement results = new XElement("Opinions",
    cblOpinion.Items.OfType<ListItem>()
        .Where(o => o.Selected)
        .Select(o => new XElement("Opinion", o.Text)));

These XML results are stored in the database. At some point we are going to have to report on the opinions of the partners. Assuming we can search the database to obtain partner records for a certain survey within a certain time period, we'd like to report on data like:

Choice 1, 378
Choice 2, 120
Choice 3, 629

I spent several iterations trying to figure out the best way to do this. In the end, as I suspected, it was easier than I thought, and it wouldn't surprise me if someone found an even easier way, as LINQ is powerful. The code below performs two steps: 1) get all survey opinions from the database, creating a sequence of XElement objects containing all the opinions, and 2) create a grouping sequence of all like opinions. The example uses the LINQ to SQL support in VS 2008.

// Step #1
// extract all survey opinions from the DB and combine them
// under a single root element
DbDataContext db = new DbDataContext();
XElement allOpinions = new XElement("Survey",
    db.Opinions.Select(x => x.Opinions)); // database column is called Opinions

// Step #2
// group like opinions together (could have been done in the
// previous LINQ statement, but kept separate since there may be
// other queries to perform on allOpinions)
var groups = allOpinions.Descendants("Opinion")
    .GroupBy(x => x.Value); // e.g. "Choice 1"

I can output the result of #1 and #2 as:

// Output #1
Console.WriteLine(allOpinions);

// Output #2
foreach (IGrouping<string, XElement> g in groups)
{
    Console.WriteLine("{0},{1}", g.Key, g.Count());
}

Monday, May 5, 2008

Visual Studio 2008 Poster (C# key bindings)

I took the Microsoft Visual Studio 2008 Key Bindings poster and formatted it to fit a standard 30x20 poster. It's also provided in JPG rather than PDF format in case anyone wants to re-scale it.

Visual Studio 2008 Key Bindings Poster

Sunday, May 4, 2008

CheckBoxList, Linq, and XML

I’ve been playing around with the new Linq stuff in .NET 3.5. I’m finding it’s possible to do things much more easily with Linq, and with significantly less code. Recently I had to take all the selected items in an asp.net checkbox list (cblOpinion) and write them to an XML file (filePath). The code turned out to be a single statement:

new XElement("Opinions",
    cblOpinion.Items.OfType<ListItem>()
        .Where(o => o.Selected)
        .Select(o => new XElement("Opinion", o.Text)))
    .Save(filePath);

This created a nice XML file that looked like:

<Opinions>
  <Opinion>opinion a...</Opinion>
  <Opinion>opinion b...</Opinion>
  <Opinion>opinion c...</Opinion>
  <Opinion>opinion d...</Opinion>
</Opinions>
Pretty nifty!

Wednesday, April 23, 2008

Commands to increase performance of MS Server 2008

To increase network performance of MS Server 2008, disable TCP auto-tuning and receive-side scaling (RSS):

netsh interface tcp set global autotuninglevel=disabled
netsh interface tcp set global rss=disabled


This must be run with admin privs.
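If the change doesn't help, or causes problems of its own, the settings can be put back. Assuming the Server 2008 defaults (auto-tuning "normal", RSS enabled), the counterpart commands are:

```
netsh interface tcp set global autotuninglevel=normal
netsh interface tcp set global rss=enabled
```

Run `netsh interface tcp show global` to verify the current values either way.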

Monday, April 21, 2008

Can't compile Linq after VS 2008 migration

I migrated a VS 2005 asp.net website (not using Linq) to VS 2008. Initially, the website compiled fine, but once I tried to add Linq statements, it would not compile.

I had already done the following to prepare for Linq use:

1) Added the System.Core assembly references under <compilation><assemblies> in the web.config so System.Core (where Linq lives) would be referenced (copied several of these from a virgin VS 2008 asp.net site)

2) Modified the site Build properties and targeted .NET 3.5

3) Added "using System.Linq"

But it still would not compile...all Linq statements were not recognized (even though Intellisense worked). It was like I was still using the .NET 2.0 compiler.

When I looked back at the web.config from a virgin VS 2008 asp.net site, I realized I also needed the <system.codedom> section. I copied this entire section into my web.config and it now compiles fine (with the 3.5 compiler).

It would have been nice if the conversion tool from VS 2005 to 2008 would have done this for me. I have the first official release of VS 2008.
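For reference, the section in question is <system.codedom>, which tells ASP.NET to use the v3.5 compiler. A sketch of the C# portion is below; the exact version and token values should be copied from an actual VS 2008-generated web.config rather than typed from here.

```xml
<system.codedom>
  <compilers>
    <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4"
        type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <providerOption name="CompilerVersion" value="v3.5"/>
      <providerOption name="WarnAsError" value="false"/>
    </compiler>
  </compilers>
</system.codedom>
```

Without the CompilerVersion provider option, the site is compiled with the .NET 2.0 compiler even when targeting 3.5, which matches the symptoms above.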

Saturday, March 29, 2008

This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

Description:
When you try to access a WCF service hosted in IIS, you get the following error:

This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

Context:
You have a website hosted in IIS and the site has multiple host headers defined for the site (e.g. www.domain.com, domain.com).

Fix:
There doesn’t appear to be a nice workable solution for this as of VS 2008 initial release on .NET 3.5. The only “hack” found out there is to define a new Factory for the service.

Step 1:

using System;
using System.Configuration;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace Foo
{
    public class CustomHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(
            Type serviceType,
            Uri[] baseAddresses)
        {
            // Specify the exact URL of your web service in the config file:
            // e.g. http://www.domain.com/service/myservice.svc
            Uri webServiceAddress =
                new Uri(ConfigurationManager.AppSettings["ServiceUri"]);

            ServiceHost webServiceHost =
                new ServiceHost(serviceType, webServiceAddress);

            return webServiceHost;
        }
    }
}
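The factory above reads the service address from appSettings, so the web.config on the hosting site needs a matching entry. The key name "ServiceUri" and the URL below are just the values assumed in the example:

```xml
<appSettings>
  <add key="ServiceUri" value="http://www.domain.com/service/myservice.svc"/>
</appSettings>
```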

Some web articles had you also creating a new host derived from ServiceHost, but that is not needed.

Step 2:
Modify the .svc file on the site hosting the service (not the .svc in Visual Studio):

<%@ ServiceHost Service="Foo.Service1" Factory="Foo.CustomHostFactory" %>

The above .svc content may also include the language and debug specifiers.

Can't add web reference in VS 2008 due to computer name being used for schemaLocation

Description:
You can’t add a web reference using Visual Studio 2008 for a WCF service since the WSDL is using the computer name rather than the service address. If you develop using "localhost" as the service name this is NOT an issue.

Context:
You have a computer/website with more than one IP address and are developing a WCF web service on one of the extra IP addresses (Site A).

Issue:
When you use Visual Studio 2008 to add a reference to a web service on Site A, you will get an error something like:

The document at the url http://192.168.0.54:8000/abcService.svc was not
recognized as a known document type.
The error message from each known type may help you fix the problem:
- Report from 'WSDL Document' is 'The document format is not recognized (the
content type is 'text/html; charset=utf-8').'.
- Report from 'DISCO Document' is 'There was an error downloading
'http://192.168.0.54:8000/abcService.svc?disco'.'.
- The request failed with HTTP status 404: Not Found.
- Report from 'XML Schema' is 'The document format is not recognized (the
content type is 'text/html; charset=utf-8').'.

If you manually enter the service URL in a browser for the service at the IP address (e.g. 192.168.0.54), you will see in the WSDL XML that it is trying to import schemas (schemaLocation) using the computer name, not the service address (192.168.0.54).

Fix:
I fixed this by entering the IP address as the host header in IIS. The IP address was then used rather than the computer name for all WSDL url references.

Thursday, January 3, 2008

Microsoft SQL, stored procedures, optional parameters

Here is a good article on the pros/cons and performance considerations of using optional parameters in SQL stored procedures:

http://www.sommarskog.se/dyn-search.html

Calling Web Services from SQL 2005

Here is an article on how to call Web Services from SQL 2005. The article addresses SQL triggers, but the same approach can be used for user-defined functions or stored procedures.

http://www.codeproject.com/KB/database/SQLCLR.aspx
