LINQ Expression Trees and the Specification Pattern

Over the past couple of months I have tried to immerse myself in domain-driven design, which includes learning about its purpose, the methodology, and the domain patterns presented in Evans’ book and built upon in many other venues (blogs, conferences, etc.). While I have not worked on a full-fledged DDD project, I have fiddled with a lot of patterns. One of these is the Specification pattern, which says to introduce a predicate-like Value Object into the domain layer whose purpose is to evaluate whether an object meets some criteria. From what I’ve read in Evans’ book, specification objects typically have an isSatisfiedBy method that takes a domain object and returns a boolean. The specification therefore encapsulates a predicate that can be used to test an object to see if it satisfies the criteria.


The problem that Evans later calls out is that of querying a data store using specification objects as filters. Because using the specification to filter records from the database requires that those records be selected and reconstituted into objects, it can be inefficient for some applications to use specification objects as is. (Imagine using a specification object on one million rows in the Customer table just to find the gold Customers!) Surely we can do better.

Ideas

One idea in the book is to allow a repository to help with the implementation and utilize double dispatch to keep the separation of domain and infrastructure intact. Application code calls a method on a repository to query for objects based on a specification. That repository passes itself to a method on the specification object, so the specification can utilize the repository’s power to query for the objects that fulfill the criteria, and then return that data to the application.


Another alternative is to harness the power of LINQ and expression trees to represent the predicate that the specification object encapsulates. This means that we can (1) use the expression trees in the infrastructure to let the data store take care of filtering and (2) still represent our rule in one location without resorting to compromises in the repository API.

Expression trees are abstract syntax trees that can represent the predicates that specification objects strive to encapsulate. With these expression trees, certain O/R mappers like LINQ to SQL, the Entity Framework, and LLBLGen Pro can determine the intent of the code and translate it into the corresponding T-SQL code to run against the database.

Creating an expression tree is very simple. In fact, if you’ve used any of the O/R mappers I mentioned above, you’ve probably used them already. Here’s an example of an expression tree being used in LINQ to SQL to generate the WHERE clause in the corresponding T-SQL query below.

NorthwindDataContext db = new NorthwindDataContext();
db.Products.Single(p => p.ProductName == "Aniseed Syrup");

SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [dbo].[Products] AS [t0]
WHERE [t0].[ProductName] = @p0
-- @p0: Input NVarChar (Size = 13; Prec = 0; Scale = 0) [Aniseed Syrup]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1

Normally the lambda expression ‘p => p.ProductName == "Aniseed Syrup"’ would be treated as a Func<Product, bool>. However, in this particular usage the compiler infers that it is an Expression<Func<Product, bool>>. The difference means that the LINQ to SQL library no longer has a method pointer. Instead, it has a tree which represents what that method does. LINQ to SQL can visit the nodes in this tree and translate what it finds into SQL without ever invoking the code itself. A simple Func does not have that capability; it is simply a method pointer, like any other delegate type.
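To make the distinction concrete, here is a minimal sketch (the names are my own) contrasting the two forms of the very same lambda:

```csharp
using System;
using System.Linq.Expressions;

class FuncVersusExpression
{
    static void Main()
    {
        // A Func<T, bool> is a compiled delegate; all you can do is invoke it.
        Func<int, bool> func = n => n > 5;

        // The same lambda assigned to Expression<Func<T, bool>> causes the
        // compiler to build a tree describing the code, which we can inspect.
        Expression<Func<int, bool>> expr = n => n > 5;

        Console.WriteLine(expr.Body.NodeType); // GreaterThan
        Console.WriteLine(func(10));           // True

        // An expression tree can also be compiled back into a delegate.
        Console.WriteLine(expr.Compile()(10)); // True
    }
}
```

O/R mappers walk the nodes of `expr` (here, a GreaterThan binary node over a parameter and a constant) instead of invoking it, which is what makes the SQL translation above possible.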

I hope you start to see how expression trees and the specification pattern can be very powerful together. If in addition to exposing an isSatisfiedBy method on the specification object, we add something which exposes the raw Expression, the repository can compose this Expression into the query and filter the results using the infrastructure. Let’s look at some code.

For this example, let’s continue to use the Products table from Northwind. The specification we implement here will tell us whether a product is a low stock product, i.e., whether the number of units in stock for a particular product falls below a certain threshold. That threshold is defined in another system, so we will feed that data to the specification.

Let’s start with the basics. Here’s the base class for all Specifications. Instead of using IsSatisfiedBy, we expose a method which returns an expression tree of type Expression<Func<T, bool>>.

public abstract class Specification<T>
{
    public abstract Expression<Func<T, bool>> IsSatisfied();
}

The Expression class is in the System.Linq.Expressions namespace which is a part of System.Core.dll. Remember, this is just a different representation of IsSatisfiedBy; instead of keeping the logic embedded in a method in the specification object, we package the logic in an expression tree. The predicate still receives an object and returns a boolean. Other classes, like the ProductRepository, can now leverage this expression tree to optimize the query it sends to the database.

public partial class ProductRepository : IProductRepository
{
    public IQueryable<Product> SelectSatisfying(Specification<Product> specification)
    {
        return this.context.Products.Where(specification.IsSatisfied());
    }
}

Here we use the Entity Framework to select the Products that match a certain Product Specification. (The field "context" is the ObjectContext, in this case.) However, we could switch this for any data access technology that can leverage expression trees and retrieve similar results.

The next step is to implement the actual specification.

public class LowStockSpecification : Specification<Product>
{
    public LowStockSpecification(int lowStockThreshold)
    {
        this.LowStockThreshold = lowStockThreshold;
    }

    public int LowStockThreshold { get; private set; }

    public override Expression<Func<Product, bool>> IsSatisfied()
    {
        return p => p.UnitsInStock < this.LowStockThreshold;
    }
}

Evans says that specifications should be value objects, so I’ve taken that to heart and made this class immutable. This allows us to make some optimizations with specifications (caching the expression tree, introducing an IsSatisfiedBy by reusing the logic in the expression tree, etc.) if we would like.
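Here is one way the IsSatisfiedBy optimization could look; this is a sketch, and the cached field is my own addition to the Specification<T> base class:

```csharp
using System;
using System.Linq.Expressions;

public abstract class Specification<T>
{
    // Cached compiled form of the expression tree (hypothetical optimization).
    private Func<T, bool> compiledPredicate;

    public abstract Expression<Func<T, bool>> IsSatisfied();

    // In-memory equivalent of the classic IsSatisfiedBy method. It reuses the
    // same expression tree, so the rule still lives in exactly one place.
    // Caching the compiled delegate is safe because specifications are immutable.
    public bool IsSatisfiedBy(T candidate)
    {
        if (this.compiledPredicate == null)
        {
            this.compiledPredicate = this.IsSatisfied().Compile();
        }

        return this.compiledPredicate(candidate);
    }
}
```

With this in place, the same LowStockSpecification instance can filter a database query through the repository or test a single in-memory object.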

This final code snippet shows how to leverage the specification and repository together.

public class ProductReorderingService
{
    private IProductRepository productRepository;

    public ProductReorderingService(IProductRepository productRepository)
    {
        this.productRepository = productRepository;
    }

    public void ReorderLowStockProducts()
    {
        LowStockSpecification spec = new LowStockSpecification(5);
        foreach (var p in this.productRepository.SelectSatisfying(spec))
        {
            // Reorder product
        }
    }
}

Composing Specifications

One property of specifications is that they can be combined to form more interesting predicates. This would allow our ProductRepository to support queries that involve multiple specification instances—for example, a filter that checks for units with low stock OR units whose stock is below their re-order level. The most common implementation I’ve seen of this requirement involves three new classes, AndSpecification<T>, OrSpecification<T>, and NotSpecification<T>. While it’s easy enough to implement these when all you worry about is IsSatisfiedBy (e.g. spec1.IsSatisfiedBy(o) && spec2.IsSatisfiedBy(o) for the AndSpecification<T>), it’s actually a bit tricky to do this with expressions.

Fortunately, it’s not impossible, and Colin Meek has it documented on his blog post about combining predicates in the Entity Framework, but the concepts apply more generally to any provider that can use expression trees. Be careful though; if you’re using the Entity Framework you will have to copy more code than you would with LINQ to SQL. I am not sure about LLBLGen Pro.

If you use the extension methods that Colin provides for AND’ing and OR’ing expression trees together, you’ll end up with these implementations of AndSpecification<T> and OrSpecification<T>:

public class AndSpecification<T> : Specification<T>
{
    private Specification<T> spec1;
    private Specification<T> spec2;

    public AndSpecification(Specification<T> spec1, Specification<T> spec2)
    {
        this.spec1 = spec1;
        this.spec2 = spec2;
    }

    public override Expression<Func<T, bool>> IsSatisfied()
    {
        return this.spec1.IsSatisfied().And(this.spec2.IsSatisfied());
    }
}

public class OrSpecification<T> : Specification<T>
{
    private Specification<T> spec1;
    private Specification<T> spec2;

    public OrSpecification(Specification<T> spec1, Specification<T> spec2)
    {
        this.spec1 = spec1;
        this.spec2 = spec2;
    }

    public override Expression<Func<T, bool>> IsSatisfied()
    {
        return this.spec1.IsSatisfied().Or(this.spec2.IsSatisfied());
    }
}

We’ll have to write the NotSpecification<T> ourselves, but this is not as involved as And and Or, even with the Entity Framework. We essentially take the body of the expression tree from the original specification and negate the result. Using the patterns you can read about in Colin’s blog post, we can use the following class as our NotSpecification<T>.

public class NotSpecification<T> : Specification<T>
{
    private Specification<T> originalSpec;

    public NotSpecification(Specification<T> originalSpec)
    {
        this.originalSpec = originalSpec;
    }

    public override Expression<Func<T, bool>> IsSatisfied()
    {
        Expression<Func<T, bool>> originalTree = this.originalSpec.IsSatisfied();
        return Expression.Lambda<Func<T, bool>>(
            Expression.Not(originalTree.Body),
            originalTree.Parameters.Single()
        );
    }
}
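With these pieces in place, composing the "low stock OR below re-order level" filter described earlier becomes a few lines. (BelowReorderLevelSpecification is a hypothetical sibling specification that I have not implemented in this post.)

```csharp
// Combine two specifications into a single predicate; the repository
// composes the combined expression tree into one database query.
var lowOrReorder = new OrSpecification<Product>(
    new LowStockSpecification(5),
    new BelowReorderLevelSpecification());

IQueryable<Product> flagged = productRepository.SelectSatisfying(lowOrReorder);
```

Because each combinator is itself a Specification<T>, the composition nests arbitrarily deep (e.g., wrapping the result in a NotSpecification<Product>).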

This is all well and good, but doesn’t this tie my domain to my infrastructure?

I think you can find arguments for both viewpoints. The specification pattern allows you to encapsulate a predicate to determine whether an object matches a condition. My opinion is that whether that predicate is exposed as a method or as an expression tree, the intent is preserved and there is one place where the criteria for a specification are checked. It does require you to use infrastructure that can utilize expression trees, but I would say that there is nothing about expression trees that ties them to the infrastructure layer directly. The details of the underlying data store have not leaked into the domain layer. If I had a provider that could use expression trees for XML or an object database store, then my domain layer would not change.

I enjoy learning about DDD and what other folks have done in this area. I’d love to hear your feedback.

Type Transparency in .NET 4 – #11

Up to this point I have focused on transparency with regards to .NET methods, but you can utilize the transparency attributes on types as well. They basically imply the same layering as they do when applied to methods, but there are some interesting invariants that the CLR will enforce with regards to type transparency.

There are two attributes of interest, the System.Security.SecuritySafeCriticalAttribute and the System.Security.SecurityCriticalAttribute. If you remember from the last tip, transparent code can only call critical code through safe critical code. So what does it mean for a type to be safe critical or critical?

In most cases, it means that every member—this includes methods, fields, property getters and setters, nested classes, and delegates—inherits the annotation. Have a look at the class below.

[SecurityCritical]
public class Foo
{
    public static int Count;

    public static class Bar
    {
        public static void Exec() { }
    }

    public Foo()
    {
    }

    public void Baz()
    {
    }
}

The Foo class is marked SecurityCritical, which means that transparent code cannot do the following:

  • Instantiate a new Foo.
  • Access the static Count field.
  • Call the Exec method on the nested Bar class.
  • Call the Baz method.
  • Use reflection to call any of the above.

So even though the fields, methods, and nested classes aren’t explicitly marked security critical, the attribute on the class forces the critical behavior to flow down to all its members.
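A minimal illustration of that flow-down, assuming the calling code lives in a transparent method (the Caller class is my own):

```csharp
public class Caller
{
    // This method carries no transparency attributes, so it is transparent.
    public void UseFoo()
    {
        Foo f = new Foo(); // fails at run time: transparent code cannot
        f.Baz();           // use any member of a [SecurityCritical] type
    }
}
```

The code compiles without complaint; the transparency rules are enforced by the CLR at run time, not by the C# compiler.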

When you start mixing transparency and inheritance, it gets a bit tricky. There are some simple rules you can learn to help.

1. Derived types must be at least as restrictive as their base types.

If I decide to extend Foo with a FooBar class, then FooBar must also be marked with the SecurityCriticalAttribute. Otherwise, when the JIT compiler encounters code that instantiates or uses FooBar, the runtime will throw a TypeLoadException. In other words, Main will not even execute here:

public class FooBar : Foo
{
}

static void Main(string[] args)
{
    new FooBar();
}

Here is a list of the allowed combinations of base types and derived types.

Base Type       Derived Type
-------------   -------------
Transparent     Transparent
Transparent     Safe Critical
Transparent     Critical
Safe Critical   Safe Critical
Safe Critical   Critical
Critical        Critical

2. Overridden methods must be as restrictive as the base method.

This means that when you override a Critical method, your method must also be marked Critical. However, Transparent and Safe Critical are considered as the same restriction from this rule’s point-of-view, so I can have a Transparent override of a Safe Critical method, and vice versa, without problems.
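For example, the following combination is allowed under rule 2 (a sketch; both classes are hypothetical):

```csharp
public class Base
{
    [SecuritySafeCritical]
    public virtual void Run() { }
}

public class Derived : Base
{
    // A transparent override of a safe critical method is legal, because
    // Transparent and Safe Critical count as the same level of restriction
    // for overrides. Overriding a Critical method this way would not be.
    public override void Run() { }
}
```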

What, then, is the problem with this code?

[SecurityCritical]
public class RemotableObject : MarshalByRefObject
{
    public override object InitializeLifetimeService()
    {
        return base.InitializeLifetimeService();
    }
}

In .NET 4 the MarshalByRefObject.InitializeLifetimeService method is Critical, but we also established earlier in this post that if you mark a type as Critical, then every member inside of it is also Critical, right?

Well, I said "in most cases." This is the exception to the rule. From there we come to the last rule.

3. Overridden methods are always Transparent by default.

The problem above, then, can be remedied by marking InitializeLifetimeService with the SecurityCriticalAttribute explicitly.

And that’s it for type transparency!

An Introduction to Security Transparency in .NET 4 – #10

Last week I covered security transparency in CLR 2.0 by looking at topics like how transparency can reduce your security footprint, using transparency in CLR 2.0, and transparent code behavior in CLR 2.0.

As you may have noticed, the transparency story changes in .NET 4. It would be too much to write about everything that has changed, so I’ll address the high-level points in this post and build on that foundation in future posts.

In the second version of the CLR, which includes .NET 2.0 to .NET 3.5 SP1, transparency’s goal was to separate code into layers to reduce the time needed for security audits. The rationale was that most code in an assembly is transparent and thus doesn’t require a lot of attention, because it doesn’t do anything interesting from the point of view of security (like call unmanaged code or unverifiable code). The critical code is what requires careful scrutiny.

.NET 4 has improved security transparency by making it a full-fledged enforcement mechanism for these invariants. Consider one of the differences between the models. Transparent code in CLR 2.0 can still call unmanaged code (through P/Invoke, COM Interop) if it has UnmanagedCode permissions. However, since native code isn’t governed by the permission set of the AppDomain, this is a potentially dangerous operation. This means that you still had to audit transparent code in CLR 2.0 in case it called unmanaged code. In CLR 4.0, an Exception is thrown when transparent code attempts to call native code, regardless of its grant set.

Transparent code still can’t assert for permissions, and it still can’t satisfy a demand for permissions. One change, though, is that in CLR 2.0, LinkDemands were converted to full Demands if a transparent method called a method with that LinkDemand. In CLR 4.0, transparent code cannot satisfy a LinkDemand, and an Exception is thrown.

Another means by which the enforcement is improved is the emergence of a more rigid boundary between transparent code and critical code. In CLR 2.0, transparent code in assembly Foo can call public critical code in assembly Bar. In CLR 4.0, again, an Exception is thrown. The transparency rules are now fully enforced across assembly boundaries. Transparent code cannot call any critical code directly. End of story.

In order for transparent code to call critical code now, it must call it via a method that is marked with the System.Security.SecuritySafeCriticalAttribute. This essentially replaces the need for the SecurityTreatAsSafeAttribute (which I discussed in the CLR 2.0 transparency post). You can think of safe critical code as a gateway for transparent code to call critical code. The restriction is only one way, however—that is, security critical code can call transparent code without problems.
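A sketch of that gateway layering (the class and method names are my own):

```csharp
public static class FileGateway
{
    // Critical: performs a privileged operation, e.g. touching native code.
    [SecurityCritical]
    private static void ReleaseNativeHandle() { /* unmanaged call here */ }

    // Safe critical: the only doorway for transparent callers. It should
    // validate its inputs before passing control into critical code.
    [SecuritySafeCritical]
    public static void CloseFile(string path)
    {
        if (path == null) throw new ArgumentNullException("path");
        ReleaseNativeHandle();
    }
}
```

Transparent code can call CloseFile freely, but a direct call to ReleaseNativeHandle from transparent code would fail at run time.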

There is so much more to cover with regards to transparency in .NET 4 that I think this is a good stopping point for today. If you can’t wait for more information, you can read the documentation as well as watch a Channel 9 interview of Shawn Farkas, a Senior SDE on the CLR security team, where he digs into the new security rules in .NET 4. Enjoy!

Hosting Conditional APTCA Assemblies – #9

Last Friday I discussed how to host a partial trust sandbox, and yesterday I touched on conditional APTCA in .NET 4. By the title of this post, it’s probably no surprise that we’re going to combine the two concepts and examine how to allow partially trusted code to call a conditional APTCA assembly.

Creating a sandbox requires an instance of the AppDomainSetup class. This class has a new property in .NET 4 called PartialTrustVisibleAssemblies. This is a string array where each value in the array contains an assembly’s simple name along with its public key (not the public key token!). Let’s look at an example.

Here I have a simple console application that attempts to create a new HttpCookie in partial trust.

public class Program : MarshalByRefObject
{
    static void Main(string[] args)
    {
        RunInPartialTrust();
    }

    private static void RunInPartialTrust()
    {
        AppDomainSetup setup = new AppDomainSetup
        {
            ApplicationBase = Environment.CurrentDirectory
        };

        PermissionSet grantSet = new PermissionSet(null);
        grantSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.AllFlags));
        AppDomain domain = AppDomain.CreateDomain("PT Sandbox", null, setup, grantSet);

        Program p = (Program)domain.CreateInstanceAndUnwrap(
            Assembly.GetExecutingAssembly().FullName,
            typeof(Program).FullName
        );

        p.PartialTrustMain();
    }

    public void PartialTrustMain()
    {
        // Oops…
        HttpCookie cookie = new HttpCookie("Foo");
    }
}

(If you need a refresher on what the code in the RunInPartialTrust method does, check out my previous tip on hosting partial trust sandboxes.)

The System.Web.HttpCookie class is in the System.Web assembly, which is marked as conditional APTCA in .NET 4. Because we haven’t done anything special in our hosting code, calling the HttpCookie constructor throws an all too familiar SecurityException…

SecurityException: That assembly does not allow partially trusted callers.

We need to modify our AppDomain setup code to allow this call to work. For this we’ll need the name and public key of the assembly. The name is “System.Web,” but what’s the public key?

If you’re like me and don’t memorize public keys, you’ll need some help here. Remember your trusty friend sn.exe, the Strong Name Tool? It has a useful function that allows you to extract a public key from a strong named assembly.

sn.exe -Tp <assembly>

Public Key - System.Web

Taking this information and modifying the AppDomainSetup instance yields this small change.

AppDomainSetup setup = new AppDomainSetup
{
    ApplicationBase = Environment.CurrentDirectory,
    PartialTrustVisibleAssemblies = new string[] { "System.Web, PublicKey=0024000004800000940000000602000000240000525341310004000001000100 etc." }
};

Note the presence of “PublicKey=” in the string. This must be present in order for partial-trust visible assembly registration to work. Also, don’t copy and paste this, as I obviously didn’t have room to paste the entire public key. :)

Re-running the application will allow the call to System.Web.HttpCookie’s constructor.

Interesting Tidbit: On the Entity Framework we ran into a bug where we called APIs in System.Web in partial trust where the host was a XAML Browser Application (XBAP), not ASP.NET. The SecurityException above was thrown, and now you know why! So be careful if you are calling into framework code like System.Web from partial trust, and the host is not the common one.

In these situations it might be useful for you to check which conditional APTCA assemblies can be called from partial trust. You can do this by reading the PartialTrustVisibleAssemblies property of the current AppDomain through the following string of property calls.

AppDomain.CurrentDomain.SetupInformation.PartialTrustVisibleAssemblies


Tomorrow we’ll move out of APTCA and partial trust hosting onto something new.

Conditional APTCA in .NET 4 – #8

The first item on this week’s security tips is about a new feature in .NET 4 called conditional APTCA. If you read my previous tip on the AllowPartiallyTrustedCallersAttribute (APTCA), you’ll know that you can decorate assemblies with this attribute in order to allow calls into that assembly’s public API from partial trust.

.NET 4 advances the capabilities of APTCA to reflect the decision to give control of permissions to hosts instead of machine-wide policy. Assemblies can now specify whether they allow partially trusted callers based on whether the host allows it. ASP.NET is a good example of why this feature is useful. In .NET 4, System.Web.dll is marked conditionally APTCA, because it accepts calls from partially trusted code only if the host is ASP.NET itself. If the host is a ClickOnce application or Internet Explorer in the case of a control hosted by the browser, then partially trusted code cannot call into the System.Web assembly.

This doesn’t mean that an assembly can choose which hosts can allow partially trusted code to call it, only that the host must explicitly give access for partially trusted code to call that assembly. This means that as an application developer, I can create my own host that allows code from partial trust to call System.Web.dll. We’ll cover this in tomorrow’s tip.

In order to mark your assembly conditionally APTCA, set the attribute’s PartialTrustVisibilityLevel property to PartialTrustVisibilityLevel.NotVisibleByDefault.

[assembly: AllowPartiallyTrustedCallers(PartialTrustVisibilityLevel = PartialTrustVisibilityLevel.NotVisibleByDefault)]

Next time we’ll talk about how to set up a host to enable partially trusted code to call conditional APTCA assemblies.

How to Host a Partial Trust Sandbox – #7

In previous tips, I referenced some APIs that allowed me to run code in partial trust, and we’ll finally cover that code today, as well as some API changes made in .NET 4 to make it easier to set up the sandbox.

Where We’ve Come From

In .NET 1.1 and below, the only way to control trust levels was through CAS Policy, which was a powerful but very complex system for managing which permissions apply to given assemblies loaded in your application. The gist is that there are multiple levels of policy—Enterprise, Machine, User, and AppDomain—each with code groups and membership conditions for those code groups. Each code group specified a permission set, and the membership conditions specified which assemblies were classified in a given code group, based on its evidence, like its Zone, StrongName, Url, etc. Since an assembly can belong to multiple code groups, the permissions for an assembly were unioned across all code groups within a policy level, and then intersected across policy levels. But wait, there’s more! You can specify any policy level to be a "final" level or an "exclusive" level, which affects how the permissions are intersected…

If you’re feeling confused, then don’t worry. You’re not alone. If you want a more thorough discussion of CAS Policy, you can Google it. With CAS Policy’s deprecation and the subsequent focus on hosts to provide permissions instead of policy, the focus of this post is on the host, not CAS policy.

Where We Are

In .NET 2.0, the CLR team introduced new APIs which allow code to create a partial trust sandbox, where only the permissions that the host requests are granted to the code running within the sandbox. These sandboxes are actually homogeneous AppDomains, where every piece of code running in the domain is subject to one of two permission grant sets:

  1. Full Trust
  2. The grant set of the AppDomain.

Assemblies will be full trust if they either (1) are loaded from the GAC or (2) appear on the AppDomain’s list of trusted assemblies. Here’s the method behind the sandboxing magic.

public static AppDomain CreateDomain(
    string friendlyName,
    Evidence securityInfo,
    AppDomainSetup info,
    PermissionSet grantSet,
    params StrongName[] fullTrustAssemblies
)

The interesting parameters are the PermissionSet and the array of StrongName instances that are considered full trust in the sandbox. The CLR will enforce that the sandbox has only the permissions of the PermissionSet passed to this method. The set of StrongNames that you can supply describes assemblies which the AppDomain will treat as full trust. You may wonder what it means to be a full trust assembly when demands for permissions traverse the entire call stack in an AppDomain; essentially, full trust assemblies are allowed to elevate their permissions using asserts and they can satisfy LinkDemands for permissions you don’t normally have in the AppDomain.

Let’s look at an example use of the AppDomain.CreateDomain sandbox method.

static void RunInPartialTrust()
{
    AppDomainSetup setup = new AppDomainSetup
    {
        ApplicationBase = Environment.CurrentDirectory
    };

    PermissionSet permissions = new PermissionSet(null);
    permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
    permissions.AddPermission(new ReflectionPermission(ReflectionPermissionFlag.RestrictedMemberAccess));
    AppDomain appDomain = AppDomain.CreateDomain(
        "Partial Trust AppDomain",
        null,
        setup,
        permissions
    );

    Program p = (Program)appDomain.CreateInstanceAndUnwrap(
        typeof(Program).Assembly.FullName,
        typeof(Program).FullName
    );

    p.PartialTrustMain();
}

The setup process is simple. Creating the PermissionSet requires a few lines of code where you explicitly supply which permissions you want for the sandbox, and creating the AppDomainSetup object is also trivial. From there, create the AppDomain with the CreateDomain method, instantiate a new object in that AppDomain, and call a method on it. As soon as you call that method, your code will transition from the default AppDomain to the new sandboxed domain. Note that the class you instantiate should inherit from MarshalByRefObject in order for it to be marshalled across AppDomain boundaries. (In the example above, the Program class inherits from MarshalByRefObject.)

You may wonder why I pass null for the Evidence parameter. Most APIs in .NET 4 that expose an Evidence parameter are deprecated because those methods typically interact with CAS policy to achieve their objectives. However, passing null is the same as passing the Evidence of the current (full-trust) AppDomain, which means it will not affect the sandbox. In fact, based on what I see in Reflector, if you pass in custom evidence, it will be ignored.

The more interesting use of the sandbox API arises when you need full trust assemblies in your new sandbox, but they don’t live in the GAC. Here you can use an improved version of the Evidence API exposed in .NET 4 to retrieve the StrongName instance from a given assembly. (Yes, that’s right. In order to be a full trust assembly in a sandbox, the assembly must be strong named.)

StrongName foo = typeof(Foo).Assembly.Evidence.GetHostEvidence<StrongName>();

In .NET 3.5 you would have had to write the following, so I’m sure you can appreciate the brevity of the new API.

StrongName sn = null;
IEnumerator enumerator = typeof(Foo).Assembly.Evidence.GetHostEnumerator();
while (enumerator.MoveNext())
{
    sn = enumerator.Current as StrongName;
    if (sn != null)
    {
        break;
    }
}

After you aggregate all of the StrongName instances that you need, pass them as the last parameter of AppDomain.CreateDomain to treat the assemblies identified by those StrongNames as full trust. Afterwards your sandbox is up and running, and you can start playing with partially trusted code.
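Putting that together, supplying the full trust list might look like this; a sketch where Foo and Bar stand in for types from your own strong-named assemblies, and setup and permissions come from the earlier example:

```csharp
// Collect the StrongName evidence for each assembly that should run
// with full trust inside the sandbox.
StrongName fooSn = typeof(Foo).Assembly.Evidence.GetHostEvidence<StrongName>();
StrongName barSn = typeof(Bar).Assembly.Evidence.GetHostEvidence<StrongName>();

// Pass them via the params StrongName[] parameter of CreateDomain.
AppDomain sandbox = AppDomain.CreateDomain(
    "Partial Trust AppDomain",
    null,
    setup,
    permissions,
    fooSn,
    barSn);
```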

The AllowPartiallyTrustedCallersAttribute (APTCA) – #6

I decided for today’s tip to review a concept that may be familiar to a lot of you already, because it provides some backstory before I jump into more .NET 4 security topics.

The System.Security.AllowPartiallyTrustedCallersAttribute (APTCA, for short) is used to expose strong-named assemblies to partially trusted callers. If you don’t need to expose your library to partially trusted callers (examples include a web application running in partial trust or a WPF application running in a ClickOnce sandbox), then you shouldn’t worry about this attribute. Applying it to your assembly means you should audit your assembly very carefully to ensure that partially trusted callers can’t elevate their privileges.

The only time partially trusted code can call a strong-named assembly is when the strong-named assembly has APTCA applied. (Partially trusted code can always call assemblies without strong names.) An assembly is considered strong-named when it’s signed with a public/private key pair. You can do this through the Signing tab on the project properties in Visual Studio or by using the compiler directly. The picture below shows the option in a C# project in VS2008. (VB projects also feature the Signing tab with similar contents.)

Signing Tab in VS

You can also delay sign the assembly, as I’ve done above. Test-signed assemblies are also considered strong-named.

Once an assembly is strong-named, all of its public APIs will be protected with a LinkDemand for FullTrust, implying that direct callers of the public API must be fully trusted unless they want to face a hard failure. Let’s look at a concrete example of code that needs APTCA to run successfully.
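That implicit protection is roughly equivalent to decorating every public member by hand with a link demand for full trust, along these lines:

```csharp
// Roughly what the runtime implies on public members of a strong-named
// assembly that lacks APTCA (shown here explicitly for illustration).
[PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")]
public void Execute()
{
    // ...
}
```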

Driver.exe

public class Program
{
    public static void Main(string[] args)
    {
        RunInPartialTrust();
    }

    public void PartialTrustMain()
    {
        Bar b = new Bar();
        b.Execute();
    }
}

StrongNamedLibrary.dll – A strong-named assembly

public class Bar
{
    public void Execute()
    {
        new Baz().ExecuteCore();
    }

    private class Baz
    {
        public void ExecuteCore()
        {
            Console.WriteLine("Hello world!");
        }
    }
}

When Driver.exe executes, the Main method sets up a partial trust sandbox (which I’ll cover in tomorrow’s tip) and calls PartialTrustMain inside it. From PartialTrustMain onward, the code is partially trusted, meaning it runs at a trust level other than full trust. If you’re not familiar with sandboxing, don’t get hung up on the details; the important point is that in this application, PartialTrustMain is partially trusted code.
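I’ll cover the sandbox API in detail tomorrow, but for the curious, here is a rough, hypothetical sketch of what RunInPartialTrust might look like using the simple-sandboxing overload of AppDomain.CreateDomain. Deriving Program from MarshalByRefObject (so an instance can be created across the domain boundary) and the exact permission set are assumptions for illustration, not necessarily what the real Driver.exe does:

```csharp
// Hypothetical sketch only -- tomorrow's tip covers the real sandbox API.
using System;
using System.Security;
using System.Security.Permissions;

public class Program : MarshalByRefObject // assumed, so it can cross the domain boundary
{
    public static void Main(string[] args)
    {
        RunInPartialTrust();
    }

    private static void RunInPartialTrust()
    {
        // Grant only Execution permission -- anything less than full trust
        // makes the sandboxed code partially trusted.
        PermissionSet grantSet = new PermissionSet(PermissionState.None);
        grantSet.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;

        // The simple-sandboxing overload of CreateDomain.
        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox", null, setup, grantSet);

        // Create a Program inside the sandbox and run PartialTrustMain there.
        Program p = (Program)sandbox.CreateInstanceAndUnwrap(
            typeof(Program).Assembly.FullName, typeof(Program).FullName);
        p.PartialTrustMain();
    }

    public void PartialTrustMain()
    {
        Bar b = new Bar();
        b.Execute();
    }
}
```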

As soon as PartialTrustMain tries to instantiate the Bar class from StrongNamedLibrary.dll, we get a SecurityException:

APTCA SecurityException

Applying APTCA to StrongNamedLibrary will allow Driver.exe to call the library and output text to the console.

[assembly: System.Security.AllowPartiallyTrustedCallers]


Successful Output

So this is the simple case. Let’s look at some others…

What if Driver.exe is strong-named?

It doesn’t matter. Being strong-named does not equal being fully trusted, so you would see the same SecurityException. If it were otherwise, you could create a new strong-named assembly that calls StrongNamedLibrary.dll, call that new assembly from Driver.exe, and effectively circumvent the LinkDemand for FullTrust.

In fact, when using the sandbox API (which, again, I’ll talk about in a later tip), strong-naming Driver.exe would itself cause a failure, because the partially trusted code tries to create a new instance of the Program class in order to call the PartialTrustMain method. That will fail unless Driver.exe is itself marked with APTCA as well.

What if I use reflection?

Good question. There are two cases here. First, let’s try creating an instance of Bar through reflection.

public void PartialTrustMain()
{
    typeof(Bar).GetConstructor(Type.EmptyTypes).Invoke(null);
}

Same SecurityException, different stack trace.

APTCA SecurityException with Reflection

Second, let’s try creating a new instance of Baz and calling its ExecuteCore method.

public void PartialTrustMain()
{
    Type bazType = typeof(Bar).GetNestedType("Baz", BindingFlags.NonPublic);
    object baz = bazType.GetConstructor(Type.EmptyTypes).Invoke(null);
    bazType.GetMethod("ExecuteCore").Invoke(baz, null);
}

Does the code succeed or fail?

It succeeds, and "Hello world!" is written to the console. Remember, the LinkDemand for FullTrust is placed only on public APIs! The Baz type is private, so even though ExecuteCore is declared public, it is not exposed outside the assembly.

The mitigating factor here is that the partially trusted code must have enough reflection permissions to make this call. (We’ll talk more about these reflection permissions in a future tip.)

Do you have any other questions about APTCA? Leave a comment!

Opting Out of Security Changes in .NET 4 – #5

I decided to provide another tip today since .NET 4 Beta 1 was released! I definitely like the changes that the security team has made to make permissions easier to understand and to improve enforcement of transparency, but there are breaking changes here that require work you may not be ready for. If you need to revert to the old behavior (e.g. using CAS policy, CLR 2.0 transparency, or the old SecurityActions) in order to prepare for migration, then take a look below.

To enable legacy CAS policy, support for the obsolete SecurityActions, and anything else that can make AppDomains heterogeneous, add the NetFx40_LegacySecurityPolicy element to the runtime element of your configuration file. This will enable the legacy behavior only for the application for which you make the configuration change. 

<configuration>
  <runtime>
    <NetFx40_LegacySecurityPolicy enabled="true" />
  </runtime>
</configuration>


To revert to CLR 2.0 transparency, add the System.Security.SecurityRulesAttribute to your assembly and specify the Level1 SecurityRuleSet. (Level1 corresponds to the CLR 2.0 rules; Level2 to the CLR 4.0 rules.)

[assembly: SecurityRules(SecurityRuleSet.Level1)]


Update June 8, 2009: The configuration switch for enabling legacy CAS policy under .NET 4 Beta 2 has changed to NetFx40_LegacySecurityPolicy, and I’ve updated the post above. In case you are using .NET 4 Beta 1, the switch is legacyCasPolicy, as shown below.

<configuration>
  <runtime>
    <legacyCasPolicy enabled="true" />
  </runtime>
</configuration>

What’s New With Security in .NET 4? – #4

A lot.

A whole lot.

The main reason I started this series was the vast amount of changes coming to security in the latest release of the .NET Framework. Now that .NET 4 is publicly available, I want to call attention to these changes. In future tips, I’ll address them in more detail, but for now there are three big things (IMO) you should be aware of. (The documentation lists more, but they are minor compared to these three.)

CAS Policy is DEPRECATED and DISABLED by Default

With the release of .NET 4, the CLR starts the move away from machine-wide policy enforcement. This means no more code groups and membership conditions to deal with in caspol.exe or the .NET configuration tool; no more considering Enterprise, Machine, and User permissions; and no more considering how LevelFinal and Exclusive throw a monkey wrench in determining which assemblies get which permissions.

So this means that everything runs in full trust unless the host specifies otherwise. The CLR now gives full control to the host to create AppDomains that sandbox code into using particular sets of permissions. Examples of hosts are ASP.NET, Internet Explorer, and ClickOnce, where people are already used to sandboxing their applications (e.g. medium trust in ASP.NET).

As a result, all AppDomains are now homogeneous, which is just a fancy way of saying that all assemblies running in that AppDomain have one of two different permission grant sets—the grant set of the AppDomain (default) or full trust (assemblies in GAC or assemblies in AppDomain’s full trust list).

Security Transparency, Level 2

All of the previous posts I’ve done on transparency thus far have focused on transparency in the second version of the CLR. If you are using transparency today, there are changes you need to be aware of when migrating your application to .NET 4. While the intent for transparency has not changed (to isolate different groups of code based on privilege), the CLR has stepped up its enforcement based on how Silverlight implemented transparency.

I’ll cover the new rules in a later post.

Support Removed For SecurityActions: Deny, RequestMinimum, RequestOptional, RequestRefuse

In the second version of the CLR, you could place permission requests as attributes on your assemblies. If CAS policy determined that a particular assembly couldn’t receive the permissions it requested, that assembly would fail to load into the application. If the application itself couldn’t receive its requested permissions, it would fail to start at all.

In .NET 4, support for these assembly-wide permission attributes has been removed for various reasons, the main one being that they contravene the push to make permissions simpler to understand and evaluate. Remember, AppDomains are now homogeneous, so specific assemblies cannot control their own permissions. That power belongs in the hands of the host.
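For reference, these are the kinds of .NET 2.0-era assembly-level permission requests that no longer work in .NET 4. The specific permissions chosen here are just illustrative:

```csharp
// Assembly-wide permission requests from the .NET 2.0 era --
// no longer supported in .NET 4:
using System.Security.Permissions;

// "Refuse to load me unless I am at least granted Execution."
[assembly: SecurityPermission(SecurityAction.RequestMinimum, Execution = true)]

// "Grant me file I/O if policy allows it; I can run without it."
[assembly: FileIOPermission(SecurityAction.RequestOptional, Unrestricted = true)]

// "Never grant me registry access, even if policy would."
[assembly: RegistryPermission(SecurityAction.RequestRefuse, Unrestricted = true)]
```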

SecurityAction.Deny was removed because it could easily be overridden by using an assert, thus opening a security hole.

 

In future tips I’ll look deeper at each of these areas. Be sure to check out and give feedback on the beta security documentation as well!

VS 2010 and .NET 4 Beta 1 Released

The first beta of VS 2010 and .NET 4 is now publicly available! For download options, see Jonathan Wells’ blog.

The product page has a list of resources that you can look at to see some of the bigger changes. Also check out the portal over at MSDN and Jason Zander’s post on new features in the release.

You should see blog posts coming soon from the ADO.NET team on what to expect in LINQ to SQL and the Entity Framework in the beta as well.

If you’re interested in security, you should also watch the .NET Security Blog for posts about security changes in .NET 4. I am sure that Shawn will write some in-depth articles about them, but I’ll continue my series of tips that cover the changes that I’ve experienced as an internal partner over in the Entity Framework team. You can find the series here and get the feed here.
