.NET+

Friday, October 21, 2011

Protecting ClickOnce Using ASP.NET Forms Authentication

"ClickOnce does not support ASP.NET forms-based authentication because it uses persistent cookies; these present a security risk because they reside in the Internet Explorer cache and can be hacked. Therefore, if you are deploying ClickOnce applications, any authentication scenario besides Windows authentication is unsupported."

- from Securing ClickOnce Applications by Microsoft

Even though Microsoft says that ClickOnce doesn't support ASP.NET Forms Authentication, there is a workaround.

Understanding How a ClickOnce Application is Downloaded

When you access the ClickOnce deployment manifest (.application) file from IE, files are downloaded in the following sequence:

  1. IE downloads the .application file.
  2. The ClickOnce engine takes over and downloads the .application file again from the same URL.
  3. The ClickOnce engine downloads the .application file from the update location (the deploymentProvider codebase in the .application file).
  4. The ClickOnce engine downloads the application manifest (.manifest) file specified in the .application file.
  5. The ClickOnce engine downloads the files specified in the .manifest file using relative paths based on the URL of the .manifest file.

Because the ClickOnce engine doesn't support cookies, step 2 will fail without a workaround.

The Solution

You can find a workaround in the article "Make ClickOnce Work With ASP.NET Forms Authentication". The trick is cookieless Forms Authentication, introduced in ASP.NET 2.0. Be aware that the demo code from that article is for WPF Browser Applications (.xbap), which don't have an update location in the .xbap file, so step 3 doesn't apply to them. For WinForms/WPF applications, the workaround fails at step 3 because IIS cannot find the Forms Authentication ticket in the URL. Therefore, in step 2 the ticket must be embedded into the deploymentProvider codebase as well, in the same way as the .manifest file URL (the DeploymentUrl). Because cookies are not supported, in step 2 the ticket (FormsIdentity) should be retrieved from context.User.Identity. The ticket can be embedded as either a path segment or a query string.
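To make the flow above concrete, here is a hypothetical sketch of the .application handler (class, XPath, and query-string names are illustrative, not the actual demo code): it retrieves the ticket from the authenticated identity, rewrites the deploymentProvider codebase to carry the ticket, and serves the modified manifest.

```csharp
using System.Web;
using System.Web.Security;
using System.Xml;

public class ClickOnceApplicationHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // With cookieless Forms Authentication, the ticket is available
        // from the authenticated identity rather than from a cookie.
        FormsIdentity identity = (FormsIdentity)context.User.Identity;
        string ticket = FormsAuthentication.Encrypt(identity.Ticket);

        XmlDocument manifest = new XmlDocument();
        manifest.Load(context.Request.PhysicalPath);

        XmlNamespaceManager ns = new XmlNamespaceManager(manifest.NameTable);
        ns.AddNamespace("asm", "urn:schemas-microsoft-com:asm.v2");

        // Embed the ticket in the update location as a query string so that
        // step 3 (download from the deploymentProvider codebase) also
        // carries authentication.
        XmlNode provider = manifest.SelectSingleNode("//asm:deploymentProvider", ns);
        XmlAttribute codebase = provider.Attributes["codebase"];
        codebase.Value += "?ticket=" + ticket;

        context.Response.ContentType = "application/x-ms-application";
        manifest.Save(context.Response.OutputStream);
    }
}
```

Note that rewriting a signed deployment manifest invalidates its signature, so this approach assumes an unsigned manifest or a re-signing step.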

The configuration in the demo code is for IIS 7.0 Classic mode, as specified by the preCondition. For IIS 7.0 Integrated mode, the configuration is:

<handlers>
  <add name="Clickonce manifest file" path="*.application" verb="*" type="ClickOnceHandler.ClickOnceApplicationHandler,ClickOnceHandler" resourceType="Unspecified" preCondition="" />
  <add name="Clickonce files" path="*.deploy" verb="*" type="System.Web.StaticFileHandler" resourceType="Unspecified" preCondition="" />
</handlers>

Please see this article for more information about the configuration: ASP.NET Integration with IIS 7.

Upgrade Consideration

When checking for updates, the ClickOnce engine will access the .application file and the .manifest file.

If these files are protected, you will need to get a Forms Authentication ticket and embed it in the URL used to access them; otherwise, you need to detect updates with your own code.

It should be acceptable to leave these two types of files unprotected (as configured above). Even so, you still need a ticket to upgrade the application.
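A minimal sketch of embedding the ticket into the deployment URL before triggering an update check (the helper class, method, and "ticket" parameter names are illustrative assumptions; FormsAuthentication.Encrypt and FormsIdentity are the real APIs):

```csharp
using System;
using System.Web;
using System.Web.Security;

public static class DeploymentUrlBuilder
{
    // Appends an already-encrypted Forms Authentication ticket to the
    // .application URL, using "?" or "&" depending on the existing URL.
    public static string WithTicket(string applicationUrl, string encryptedTicket)
    {
        string separator = applicationUrl.Contains("?") ? "&" : "?";
        return applicationUrl + separator + "ticket=" + encryptedTicket;
    }

    // Server-side convenience overload: obtains the current user's ticket.
    public static string WithTicket(string applicationUrl, HttpContext context)
    {
        FormsIdentity identity = (FormsIdentity)context.User.Identity;
        return WithTicket(applicationUrl, FormsAuthentication.Encrypt(identity.Ticket));
    }
}
```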

The Pitfalls

Because the Forms Authentication ticket is encrypted and then embedded into the URL, the URL path can easily exceed 260 characters, which may cause an HTTP 400 Bad Request error. This appears to be a limitation of ASP.NET 2.0 and/or IIS. It is recommended to upgrade to ASP.NET 4.0 and, if necessary, increase the UrlSegmentMaxLength registry value - see more details here: Http.sys registry settings for IIS.

The configuration in web.config must be consistent with the Application Pool setting (Integrated/Classic); otherwise, the configuration will not take effect.

Sunday, October 9, 2011

Using WCF Web Service as SSRS Report Data Source

From SQL Server 2005, Reporting Services supports the XML data source type, which allows a report to consume a Web service data source.

There is no detailed documentation about this topic, and the examples I found on the Internet do not cover complex scenarios, e.g. passing multiple parameters. As a supplement, I'd like to share my findings and best practices in an example.

Example

The data contract:

[DataContract]
public class MyDataContract
{
    [DataMember]
    public int Id { get; set; }
    [DataMember]
    public string Name { get; set; }
    [DataMember]
    public DateTime Birthday { get; set; }
}

The message contracts:

[MessageContract]
public class MyReportMessageContractRequest
{
    [MessageBodyMember(Order = 0)]
    public DateTime StartDate { get; set; }
    [MessageBodyMember(Order = 1)]
    public DateTime FinishDate { get; set; }
}

[MessageContract]
public class MyReportMessageContractResponse
{
    [MessageBodyMember]
    public List<MyDataContract> DataItems { get; set; }
}

The service contract:

[ServiceContract]
public interface IMyReportService
{
    [OperationContract]
    MyReportMessageContractResponse GetMyReportDataMethod(MyReportMessageContractRequest request);
}
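SSRS calls the service with plain SOAP, so the service needs a basicHttpBinding endpoint (see best practice 1 below). A minimal sketch of the service-side configuration, assuming the contracts live in a namespace named MyNamespace (an illustrative name):

```xml
<system.serviceModel>
  <services>
    <service name="MyNamespace.MyReportService">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="MyNamespace.IMyReportService" />
    </service>
  </services>
</system.serviceModel>
```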

The Query of the report Dataset:

<Query>
  <Method Name="MyReportMessageContractRequest" Namespace="http://tempuri.org/">
    <Parameters>
      <Parameter Name="StartDate">Parameters!StartDate.Value</Parameter>
      <Parameter Name="FinishDate">Parameters!FinishDate.Value</Parameter>
    </Parameters>
  </Method>
  <SoapAction>http://tempuri.org/IMyReportService/GetMyReportDataMethod</SoapAction>
  <ElementPath IgnoreNamespaces="true">
    MyReportMessageContractResponse{}/DataItems{}/MyDataContract{Id (Integer), Name, Birthday (Date)}
  </ElementPath>
</Query>

The Best Practices

  1. The WCF service must be exposed via BasicHttpBinding.
  2. auth.
  3. Use Message Contracts and specify the Order of each MessageBodyMember. The order of the body elements is alphabetical by default; if it differs from the order specified in the query, the report will not pick the data up.
  4. Use the type name of the service method parameter (MyReportMessageContractRequest) as the method name in the query, instead of the actual method name (GetMyReportDataMethod).
  5. The namespace should be "http://tempuri.org/" (ending with "/") rather than "http://tempuri.org".
  6. A parameter referred to as Parameters!(parameter name).Value is defined in the Dataset (not in the report parameters).
  7. Use IgnoreNamespaces="true" to simplify namespace processing.
  8. Specify the field names and data types. These data types are supported: String, Integer, Boolean, Float, Decimal, Date, XML. If not specified, the report will cast the field to a string.
  9. When to use a Web service data source? Here are two typical scenarios:
    • The data is not in the database (e.g. certain report output requirements must be satisfied before the data is saved; in this case, the Web service can cache the data in memory, and the report will pick it up without touching the database).
    • The report data is the result of complicated calculations that are difficult to implement and debug using SQL scripts.
  10. When using a shared report server, debugging can be troublesome – you need to consider which machine the report server should call, and configure it properly.
  11. When designing Web services, consider how SSRS processes the messages: it maps the XML returned by the Web service to Datasets, and it makes one Web service call per Dataset.

Thursday, August 11, 2011

How to Detect Database Mirroring Failovers

My previous blog post, Caching in Entity Framework using Query Notifications, describes how to implement caching based on Query Notifications using Entity Framework and LinqToCache. I mentioned that a query notification SqlDependency was not able to detect changes after a database mirroring failover. To resolve this problem, the SqlDependency needs to be recreated on the new principal database, and to achieve this, we need to know when a failover occurs. In this post, I'll describe three approaches to detecting database mirroring failovers.

Approach 1 (Recommended) - Get and Compare the DataSource from an Opened Database Connection

When connecting to a mirrored database, we specify the failover partner in the connection string like this:

"Data Source=MyDBServer1;Failover Partner=MyDBServer2;Initial Catalog=MyDatabase;Integrated Security=True;MultipleActiveResultSets=True"

The current server name can be retrieved via the DataSource property of an opened SqlConnection object (more details can be found in this article: Database Mirroring in SQL Server (ADO.NET)). By storing the value in a static variable and comparing it with the new value, we can determine whether the current server has changed (in other words, whether a failover has occurred).
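A sketch of this comparison (the class and method names are illustrative; SqlConnection.DataSource is the real API):

```csharp
using System;
using System.Data.SqlClient;

public static class FailoverDetector
{
    private static string _lastDataSource;

    // Pure comparison: returns true when the server name differs from the
    // one recorded on the previous call (i.e. a failover has occurred).
    public static bool HasFailedOver(string currentDataSource)
    {
        bool failedOver = _lastDataSource != null
            && !string.Equals(_lastDataSource, currentDataSource,
                              StringComparison.OrdinalIgnoreCase);
        _lastDataSource = currentDataSource;
        return failedOver;
    }

    // Convenience overload reading DataSource from an opened connection.
    public static bool HasFailedOver(SqlConnection openedConnection)
    {
        return HasFailedOver(openedConnection.DataSource);
    }
}
```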

When and how to get an opened connection depends on the application itself. If Entity Framework (EF) is used, the SqlConnection can be retrieved via the Connection property of an ObjectContext object. By default, EF controls when to create, open, and close database connections. You may need to open a connection manually, as described in this article: Managing Connections and Transactions.

Approach 2 - Event Notification and Service Broker

We expect to receive an event when there is a database mirroring failover, and SQL Server does provide these events. Here are the details: Database Mirroring State Change Event Class. We can create an event notification for states 7 (Manual Failover) and 8 (Automatic Failover):

create event notification failover
on server
for database_mirroring_state_change
to service 'myservicename', 'current database';
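The target service referenced above must exist before the notification is created. A minimal sketch (the queue name is illustrative; the contract name is the one defined by SQL Server for event notifications):

```sql
-- Create a queue and bind a service to it using the built-in
-- PostEventNotification contract.
CREATE QUEUE FailoverQueue;

CREATE SERVICE myservicename
ON QUEUE FailoverQueue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
```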

Then use the WAITFOR(RECEIVE ...) command to periodically check for notifications. More details can be found here.

This approach needs Service Broker programming skills. If the principal database/server crashes, the WAITFOR(RECEIVE ...) command needs to be executed on the new principal database to get the event - there should be an event message for the failover after the mirror database becomes the principal.
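A sketch of the periodic check, assuming the notification service is bound to a queue named FailoverQueue (an illustrative name) and using an arbitrary 5-second timeout:

```sql
-- Block for up to 5 seconds waiting for a failover event message;
-- the message body contains the state-change event as XML.
WAITFOR (
    RECEIVE TOP (1)
        CAST(message_body AS XML) AS event_data
    FROM FailoverQueue
), TIMEOUT 5000;
```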

Approach 3 - System Stored Procedures

We can execute sp_dbmmonitorresults periodically against the msdb database on one or both database servers to get the mirroring status.

The drawback of this approach is that the stored procedure requires membership in the sysadmin fixed server role or in the dbm_monitor fixed database role in msdb.
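A sketch of the periodic call (the database name is illustrative; the last two arguments select just the most recent status row and skip refreshing the base table first):

```sql
-- Return the latest mirroring status row for MyDatabase.
EXEC msdb.dbo.sp_dbmmonitorresults N'MyDatabase', 0, 0;
```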

Tuesday, February 1, 2011

Caching in Entity Framework using Query Notifications

Query Notifications were introduced in SQL Server 2005 and SQL Server Native Client. They are normally used via SqlDependency and SqlCommand. However, Entity Framework hides the SqlCommand from developers. Luckily, Mike Pizzo from the ADO.NET team revealed a tip that resolves the problem. It can be found in Ryan Dunn's blog post "Using SqlDependency Objects with LINQ". The tip is:

System.Runtime.Remoting.Messaging.CallContext.SetData("MS.SqlDependencyCookie", sqlDependency.Id)

Basically, when "MS.SqlDependencyCookie" is set to a SqlDependency's Id, the SqlDependency will be automatically associated with the SqlCommand created by Entity Framework. An open source project, LinqToCache, automates this process by extending the IQueryable interface.
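A sketch of applying the tip directly (the context type MyEntities and its Products set are illustrative; the cookie key, SqlDependency, and CallContext are the real APIs):

```csharp
using System.Data.SqlClient;
using System.Linq;
using System.Runtime.Remoting.Messaging;

public static class QueryNotificationExample
{
    // Assumes SqlDependency.Start(connectionString) has already been called
    // (see the prerequisites below).
    public static void QueryWithDependency(MyEntities context)
    {
        SqlDependency dependency = new SqlDependency();
        dependency.OnChange += (sender, e) =>
        {
            // Invalidate the cached results here, then re-run the query
            // to re-subscribe - notifications fire only once.
        };

        // Associate the dependency with the command EF is about to create.
        CallContext.SetData("MS.SqlDependencyCookie", dependency.Id);
        var results = context.Products.ToList(); // the query executes here
        CallContext.SetData("MS.SqlDependencyCookie", null);
    }
}
```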
The Prerequisites to Use Query Notifications

  • Enable Service Broker in the database.
Tip: The statement ALTER DATABASE [DBName] SET ENABLE_BROKER can be used. It requires exclusive access to the database; without exclusive access, it will hang indefinitely. To get exclusive access: ALTER DATABASE [DBName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE.
  • Enable Query Notifications by calling SqlDependency.Start().
Pros

Query Notifications are very efficient, especially when there are multiple application servers. They allow applications to be notified when data has changed - no matter how the data is changed or which client makes the change.

Cons

A query notification dependency is not able to detect changes after a database mirroring failover. To resolve this issue, add code to monitor database mirroring failovers, then re-execute the relevant commands that are used together with the dependencies.

EF may generate complex queries that are not supported by Query Notifications. See Creating a Query for Notification for the limitations.

Tuesday, March 23, 2010

Data Transmission Issue over VPN

Some time back, I helped a friend fix a VPN data transmission issue on Windows Mobile. I’d like to write it down for reference.

SYMPTOMS

An application on Windows Mobile 6.0 could transfer data to the server via TCP/IP, but failed to do so when connected via Vodafone’s VPN.

CAUSE

By default in Windows, Path Maximum Transmission Unit (PMTU) Discovery is enabled. When connecting to a network via VPN, the TCP maximum segment size (MSS) value returned by PMTU discovery doesn't account for the additional header added by the VPN, so the size of the IP datagram exceeds the maximum size allowed by the network routing path. In addition, when TCP segments are destined for a non-local network, the "do not fragment" bit is set in the IP header; as a result, the IP datagram cannot be fragmented and transferred.

RESOLUTION

MORE INFORMATION

There is a very good article on Cisco's website about this: Resolve IP Fragmentation, MTU, MSS, and PMTUD Issues with GRE and IPSEC.

Thursday, December 31, 2009

Reporting on SharePoint Lists Data

When creating reports based on the data in SharePoint lists, you need to consider how to access the data and how to present it.

How to access data

There are three approaches to accessing SharePoint list data for reporting:

  1. Access the SharePoint database directly. This option provides the best performance and join capabilities. However, if Microsoft changes the database schema, the queries will break. Here is an example: Reporting on SharePoint lists from Microsoft SQL Reporting Services.
  2. Access via the SharePoint Lists Web service. This option reduces the compatibility risk, but performance may become an issue, especially when a large volume of data is accessed. It will also be challenging if you need to join data from several lists.
  3. Export the list data to an external database and then report on the data from that database. This option cannot provide “live” data for reports.

There are some third-party components based on these three approaches. Here are more details: Reporting on List Data in SharePoint.

How to present reports

There are also three types of UI for presenting the reports:

  1. Web Parts. This option integrates very well with SharePoint, but may not have enough UI elements to present complex reports. The following Web Parts can be used:
    • Lists Web Part
    • Data View / Data Form Web Part
    • Content Query Web Part (CQWP)
    • Business Data Catalog (BDC) Web Parts
    • 3rd Party Web Parts
  2. Reporting system. A reporting system like SQL Server Reporting Services (SSRS) provides powerful reporting capabilities. SSRS can access SharePoint lists via the SharePoint Lists Web service using an XML custom data source. Here is an example: SQL Reporting Services data from SharePoint lists. In SQL Server 2008 R2, the SharePoint List Data Extension can be used.
  3. Custom UI. This option is very flexible, but also costly. The UI can be built using ASP.NET, Silverlight, etc.

Thursday, October 29, 2009

Use Items in MsBuild

The following methods are applicable to Team Foundation Build 2008 (MSBuild 3.5).

Define Items with Metadata in an Item Group

<ItemGroup>
   <Environment Include="Dev">
      <Server>DevServer</Server>
   </Environment>
   <Environment Include="Test">
      <Server>TestServer</Server>
   </Environment>
   <Environment Include="Prod">
      <Server>ProdServer</Server>
   </Environment>
</ItemGroup>

Dynamically Create Items by CreateItem Task with Wildcard

<CreateProperty Value="$(DropLocation)\$(BuildNumber)\x86\Release" >
      <Output TaskParameter="Value" PropertyName="Release" />
</CreateProperty>
<CreateItem Include="$(Release)\_PublishedWebsites\Website1\**\*.*">
      <Output TaskParameter="Include" ItemName="Website1Files"/>
</CreateItem>

Create items with additional metadata:

<CreateItem Include="$(Release)\_PublishedWebsites\Website1\**\*.*" AdditionalMetadata="ToDir=$(Release)\Environments\%(Environment.Identity)">
      <Output TaskParameter="Include" ItemName="ItemsWithToDir"/>
</CreateItem>

Manipulate Items

Copy all files of a Web application to a folder:

<Copy SourceFiles="@(Website1Files)" DestinationFiles="@(Website1Files->'$(Release)\Website1Copy\%(RecursiveDir)%(Filename)%(Extension)')"/>

Copy the files of a Web application to folders for the Dev, Test & Prod staging environments:

<Copy SourceFiles="@(ItemsWithToDir)" DestinationFiles="@(ItemsWithToDir->'%(ToDir)\%(RecursiveDir)%(Filename)%(Extension)')"/>

Update the configuration for each environment (FileUpdate is one of the MSBuild Community Tasks):

<FileUpdate Files="$(Release)\Environments\%(Environment.Identity)\web.config" Regex="myserver" ReplacementText="%(Server)" IgnoreCase="True" />

"myserver" will be replaced by "DevServer" in Dev\web.config, by "TestServer" in Test\web.config, etc.

See Also