As has been mentioned already, the CLR is, like the Java virtual machine, a runtime environment that handles resource management tasks (memory allocation and garbage collection) and provides the necessary abstraction between the application and the underlying operating system.
In order to provide a stable platform that reaches the level of reliability required by transactional e-business applications, the CLR also takes on related tasks such as monitoring program execution. In .NET terminology, code that runs under CLR supervision is called "managed" code, while applications or components that run in native mode, outside the CLR, are "unmanaged" code.
The CLR watches for the traditional programming errors that for years have been at the root of the majority of software faults: accessing an array element outside its bounds, accessing unallocated memory, and buffer overruns that overwrite adjacent memory.
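To make this concrete, here is a minimal sketch of what such runtime supervision looks like in practice. It is written in Java rather than C#, since the JVM performs the same kind of runtime bounds checking that the CLR applies to managed code; the class and method names are illustrative, not part of any .NET API. An out-of-bounds array access raises a catchable exception instead of silently corrupting neighbouring memory, as the equivalent unmanaged C code could.

```java
// Illustrative example: a managed runtime (here the JVM, analogous to
// the CLR) checks every array access at run time.
public class BoundsCheckDemo {

    // Returns "ok" when the index is valid, "caught" when the runtime
    // intercepts an out-of-bounds access.
    static String probe(int[] array, int index) {
        try {
            int value = array[index];  // the runtime validates the index here
            return "ok";
        } catch (ArrayIndexOutOfBoundsException e) {
            // In unmanaged code this access could have overwritten
            // adjacent memory; here it is trapped cleanly.
            return "caught";
        }
    }

    public static void main(String[] args) {
        int[] data = new int[3];
        System.out.println(probe(data, 1));  // within bounds
        System.out.println(probe(data, 5));  // out of bounds, trapped
    }
}
```

The check itself is part of the price of managed execution discussed below: every array access carries a small runtime cost in exchange for the guarantee that such faults are detected rather than silently corrupting the process.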
This monitoring of the execution of managed code comes at a price, however. Although the performance of the current beta-test versions does not yet allow the monitoring overhead to be quantified precisely, we can expect performance to drop by at least 10%, as Microsoft itself admits. Of course, we might ask whether a 10% reduction in performance is such a bad thing if it brings new levels of reliability and availability...
And since Moore's law continues to hold for processor performance, how long must we wait before servers are 10% more powerful?