Day 1. Introduction to the Microsoft .NET Framework
This week you learn about the tools that Visual Studio .NET offers. You'll get a good understanding of what types of applications you can create using Visual Studio .NET. But before we get to that point, you must have a solid understanding of what the .NET Framework is and what it can do for you. Understanding the internals of the .NET Framework helps you better understand what's happening when Visual Studio .NET is helping you create applications. By the end of the day, you'll have better insight into the technologies that make up the .NET Framework, how the .NET Framework fits into Microsoft's vision of the future of computing, and how things are much different from the past.
What Is .NET?
When .NET was announced in late 1999, Microsoft positioned the technology as a platform for building and consuming Extensible Markup Language (XML) Web services. XML Web services allow any type of application, be it a Windows- or browser-based application running on any type of computer system, to consume data from any type of server over the Internet. The reason this idea is so great is the way in which the XML messages are transferred: over established standard protocols that exist today. Using protocols such as SOAP, HTTP, and SMTP, XML Web services make it possible to expose data over the wire with little or no modifications to your existing code.
Figure 1.1 presents a high-level overview of the .NET Framework and how XML Web services are positioned.
Figure 1.1 Stateless XML Web services model.
Since the initial announcement of the .NET Framework, it's taken on many new and different meanings to different people. To a developer, .NET means a great environment for creating robust distributed applications. To an IT manager, .NET means simpler deployment of applications to end users, tighter security, and simpler management. To a CTO or CIO, .NET means happier developers using state-of-the-art development technologies and lower development costs. To understand why all these statements are true, you need to get a grip on what the .NET Framework consists of, and how it's truly a revolutionary step forward for application architecture, development, and deployment.
Windows of the Past
In the past, millions of applications were developed for Windows-based systems using a variety of development tools and languages. Visual Basic, C++, Delphi, Java, and Access provided a great toolset that enabled you to write applications for Windows. The problem that crept up again and again was how these applications communicated with each other and how they could communicate with data beyond the departmental server. Because each language has its own runtime environment, they all run essentially inside their own box, using their own way to communicate with core system services. There was no way to get outside the box. When a new feature had to be added to a language, it was bolted onto the runtime environment through a new set of API calls. If you wanted to access the new features, each language had its own way of doing so. And, as was the case with Visual Basic, many features were simply not available because the runtime environment of Visual Basic couldn't support them. This problem seemed to have been solved with the Windows Distributed Internet Applications (DNA) architecture, which was based on Component Object Model (COM) components moving data between different types of distributed applications.
Windows DNA and COM
Writing distributed Internet applications became easier as the model of COM services that Windows servers could provide became more stable and widespread. You could write an Active Server Pages (ASP) application and access methods, properties, and events through the object model of components running inside of COM+ services on remote machines. Figure 1.2 shows the flow of a DNA/COM application.
Figure 1.2 DNA and COM in action.
Windows DNA became more accepted because of the ease with which a Visual Basic 6 developer could write components that could be accessed from any other type of application, as long as he had access to the Windows 2000 server that the COM+ services were running on. This is where the problems begin.
If you provide data to outside vendors in an application, you must write the user interface and code to allow them access to what they need. There's no simple way to expose methods, or allow other applications to call methods, on your servers. You have to open up security and give them the keys to the farm, which isn't something IT managers are likely to do. If you want to maintain multiple versions of a component, you're in a very bad way with COM. Because COM makes heavy use of the Registry and doesn't allow for a simple versioning policy, you're essentially maintaining the same component forever: constantly adding new features to it, but leaving the old stuff in. This is one of the big rules of COM: Thou shalt not change any interfaces to your components. Doing so makes for a huge headache in deployment. If you change an in or out parameter on a method, you've broken the component's contract, and every caller that depends on the original interface must be recompiled. After a component is deployed, how do you easily scale it across machines while maintaining any state data that the code expects? This isn't a trivial problem, and companies have spent millions of dollars writing state machines to handle the scalability problems that come with COM.
All these issues are solved with the .NET Framework. The services provided by the .NET Framework enable us to expose methods over HTTP and SOAP through XML Web services. The Windows Registry is not used in .NET, which eliminates the DLL Hell of the past and gives us a strong versioning policy. There are many ways to maintain state data, so we can scale applications across processors and across servers without worrying about crashing the applications running on those servers. This all starts with the common language runtime and the base class libraries.
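To make that concrete, here's a minimal sketch of an XML Web service written as an .asmx page. The OrdersService class and GetOrderStatus method are hypothetical examples invented for illustration, not part of any product. Once the page is deployed, the method is callable over HTTP and SOAP by any client that can parse XML:
<%@ WebService Language="C#" Class="OrdersService" %>
using System;
using System.Web.Services;
public class OrdersService : WebService
{
    // Exposed over HTTP/SOAP; callers need only the URL,
    // not Registry entries or COM plumbing
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        // Hypothetical lookup; a real service would query a data store
        return "Order " + orderId + ": shipped";
    }
}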
On Day 13, "XML Web Services in .NET," you learn more about the different protocols and messaging standards that make up an XML Web service.
The Common Language Runtime
At the heart of the .NET Framework is the common language runtime. The common language runtime is responsible for providing the execution environment that code written in a .NET language runs under. The common language runtime can be compared to the Visual Basic 6 runtime, except that the common language runtime is designed to handle all .NET languages, not just one, as the Visual Basic 6 runtime did. The following list describes some of the benefits the common language runtime gives you:
• Automatic memory management
• Cross-language debugging
• Cross-language exception handling
• Full support for component versioning
• Access to legacy COM components
• XCOPY deployment
• Robust security model
You might expect all those features, but this combination has never before been possible using Microsoft development tools. Figure 1.3 shows where the common language runtime fits into the .NET Framework.
Figure 1.3 The common language runtime and the .NET Framework.
Code written using a .NET language is known as managed code. Code that uses anything but the common language runtime is known as unmanaged code. The common language runtime provides a managed execution environment for .NET code, whereas the individual runtimes of non-.NET languages provide an unmanaged execution environment.
Inside the Common Language Runtime
The common language runtime enables code running in its execution environment to have features such as security, versioning, memory management, and exception handling because of the way .NET code actually executes. When you compiled Visual Basic 6 forms applications, you had the ability to compile down to native code or p-code. Figure 1.4 should refresh your memory of what the Visual Basic 6 options dialog looked like.
Figure 1.4 Visual Basic 6 compiler options dialog.
When you compile your applications in .NET, you aren't creating anything in native code. When you compile in .NET, you're converting your code—no matter what .NET language you're using—into an assembly made up of an intermediate language called Microsoft Intermediate Language (MSIL or just IL, for short). The IL contains all the information about your application, including methods, properties, events, types, exceptions, security objects, and so on, and it also includes metadata about what types in your code can or cannot be exposed to other applications. This was called a type library in Visual Basic 6 or an IDL (interface definition language) file in C++. In .NET, it's simply the metadata that the IL contains about your assembly.
The file format for the IL is known as PE (portable executable) format, the standard file format for Windows executables.
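If you're curious, you can see the IL and metadata for yourself using the IL Disassembler (ildasm.exe) that ships with the .NET Framework SDK. For example, assuming a hypothetical source file named MyLib.cs, you could run:
csc /target:library MyLib.cs
ildasm MyLib.dll
The first command compiles the source into an assembly; the second opens the assembly so you can browse its IL instructions and metadata.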
When a user or another component executes your code, a process occurs called just-in-time (JIT) compilation, and it's at this point that the IL is converted into the specific machine language of the processor it's executing on. This makes it very easy to port a .NET application to any type of operating system on any type of processor because the IL is simply waiting to be consumed by a JIT compiler.
The first time an assembly is called in .NET, the JIT process occurs. Subsequent calls don't re-JIT the IL; the previously JITted IL remains in cache and is used over and over again. On Day 5, "Writing ASP.NET Applications," you learn more about the JITting process and how it can affect your ASP.NET applications. On Day 19, "Understanding Microsoft Application Center Test," when you learn about Application Center Test, you also see how the warm-up time of the JIT process can affect application performance.
Understanding the process of compilation in .NET is very important because it makes clear how features such as cross-language debugging and exception handling are possible. You're not actually compiling to any machine-specific code; you're simply compiling down to an intermediate language that's the same for all .NET languages. The IL produced by J# .NET and C# looks just like the IL created by the Visual Basic .NET compiler. The instructions are the same; only the syntax you type in Visual Studio .NET differs. This is where the power of the common language runtime becomes apparent.
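As a rough illustration (the exact listing varies with compiler options, so treat this as a representative sketch rather than actual compiler output), a trivial method such as the following compiles to essentially the same few IL instructions in either language:
public class MathUtil
{
    // A simple C# method...
    public int Add(int a, int b)
    {
        return a + b;
    }
    // ...and the representative IL a release build emits for Add:
    //   ldarg.1    // push argument a onto the evaluation stack
    //   ldarg.2    // push argument b
    //   add        // add the two values
    //   ret        // return the result
    // The Visual Basic .NET equivalent, Function Add(...) As Integer,
    // compiles to the same instructions.
}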
When the IL code is JITted into machine-specific language, it does so on an as-needed basis. If your assembly is 10MB and the user is only using a fraction of that 10MB, only the required IL and its dependencies are compiled to machine language. This makes for a very efficient execution process. But during this execution, how does the common language runtime make sure that the IL is correct? Because the compiler for each language creates its own IL, there must be a process that makes sure what's compiling won't corrupt the system. The process that validates the IL is known as verification. Figure 1.5 demonstrates the process the IL goes through before the code actually executes.
Figure 1.5 The JIT process and verification.
When code is JIT compiled, the common language runtime checks to make sure that the IL is correct. The rules that the common language runtime uses for verification are set forth in the Common Language Specification (CLS) and the Common Type System (CTS).
Understanding the Common Language Specification
The CLS describes the concrete guidelines that make a .NET language compliant with the common language runtime. That doesn't mean a .NET language can't have language-specific features, but it does indicate that to be considered a .NET language, the language must comply with the set of requirements set forth in the CLS. Any features added to a .NET language that aren't part of the CLS won't be exposed to other .NET languages at runtime.
If your code is fully CLS compliant, it's guaranteed to interoperate with all other components written in any .NET language. Certain languages, such as C#, attempt to accommodate developers moving from C and C++ with the similarity of their syntaxes. Because C# attracts such developers, it includes functionality familiar from their native languages, such as pointers and code access to unsafe memory blocks. This functionality is not CLS compliant and won't be accessible by other .NET languages, but it's allowed by the common language runtime and the language-specific compilers. To make sure that your code is CLS compliant, compilers such as C# include checks for non-CLS-compliant code through the use of attributes. If you apply the CLSCompliantAttribute attribute to a class or method in your code and the code isn't CLS compliant, an error occurs and the compile fails. The following code demonstrates how to apply the CLSCompliantAttribute attribute in your code:
using System;
[assembly: CLSCompliantAttribute(true)]
[CLSCompliantAttribute(true)]
public class Class1
{
    public void x(UInt32 x){}
    public static void Main( )
    {
    }
}
In this case, the code won't compile because unsigned integers aren't part of the CLS.
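For illustration, a minimal fix is to change the parameter to a CLS-compliant type such as Int32 (C#'s int); with that change, the compiler check passes:
using System;
[assembly: CLSCompliantAttribute(true)]
[CLSCompliantAttribute(true)]
public class Class1
{
    // Int32 is part of the CLS, so this version compiles cleanly
    public void x(Int32 x){}
    public static void Main( )
    {
    }
}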
The second part of the verification process that the JIT compiler goes through to make sure that your code executes correctly is the verification of types. All types used in .NET must conform to the CTS.
Understanding the Common Type System
The CTS sets forth the guidelines for data type safety in .NET.
In the past, there were no rules for type safety across execution runtimes, hence the general protection fault (GPF) and blue screen of death errors that could occur when running applications. The culprit behind those meltdowns was the overlapping of memory by data types. This was a common occurrence in Windows 3.1, Windows 95, and Windows 98. When a Visual Basic developer deployed a new application, fingers had to be crossed to make sure that the data types and memory access between the newly installed DLLs and the existing ones on the system mingled happily. Most of the time they did, but when they didn't, errors occurred.
In .NET, the CTS defines types and how they can act within the bounds of the common language runtime. There are two type classifications in .NET: value types and reference types.
Value Types
Value types directly contain the data you assign them. They're built into the common language runtime and derive directly from the base System.Object type. Examples of value types are primitive types, structures, and enumerations. The primitive types include Boolean, byte, short, integer, long, single, double, decimal, date, and char.
Reference Types
Reference types don't directly contain any data; rather, they point to a memory location that contains the actual data. Reference types are built into the common language runtime and derive directly from the base System.Object type. Some examples of reference types are strings, classes, arrays, delegates, and modules (see Figure 1.6).
Figure 1.6 The common type system defined.
To make the difference between value types and reference types clearer, consider the following code, shown first in C# and then in Visual Basic .NET. It accesses a primitive type (which is a value type) and a class (which is a reference type), and assigns values to them.
using System;
namespace cSharp_ValueReference
{
    class Class1
    {
        static public int x;
        [STAThread]
        static void Main(string[] args)
        {
            x = 4;
            int y;
            y = x;
            x = 0;
            // Because each value type variable holds its own copy of
            // the data, modifying x after setting y to the value of x
            // does not affect y
            Console.WriteLine(x);
            Console.WriteLine(y);
            // Create an instance of Class2
            Class2 ref1 = new Class2();
            // Set the refValue of this instance to 5
            ref1.refValue = 5;
            // Create an object reference to the ref1 instance
            Class2 ref2 = ref1;
            // Set the refValue of the object
            ref2.refValue = 10;
            // Both lines print 10: ref1 and ref2 refer to the same
            // object in memory, so the later assignment of 10
            // overwrote the 5
            Console.WriteLine(ref1.refValue);
            Console.WriteLine(ref2.refValue);
            Console.ReadLine();
        }
    }
    class Class2
    {
        public int refValue;
    }
}
Module Module1
    Sub Main()
        Dim X As Integer = 4
        Dim Y As Integer
        Y = X
        X = 0
        Console.WriteLine(X)
        Console.WriteLine(Y)
        Dim ref1 As Class2 = New Class2()
        ref1.refValue = 5
        Dim ref2 As Class2 = ref1
        ref2.refValue = 10
        Console.WriteLine(ref1.refValue)
        Console.WriteLine(ref2.refValue)
        Console.ReadLine()
    End Sub
End Module

Class Class2
    Public refValue As Integer
End Class
In both examples, the values of the value type variables X and Y are 0 and 4, whereas the values of the reference types ref1 and ref2 are both 10. Because ref1 and ref2 point to the same memory allocation, any variable that references that object always reflects the last value assigned. Figure 1.7 shows the console output of the code.
Figure 1.7 Value and reference type test output.
You don't normally get into much trouble when dealing with reference types and value types like the example describes. When you're creating instances of classes, create a new instance of the object rather than assigning from a previously set instance, unless you intend for both variables to refer to the same object.
Now that you have an understanding of what the CTS is and how it works, you need to see how the types are removed from memory. Removing types that are no longer referenced in your applications is known as garbage collection.
Handling Memory and Garbage
The common language runtime handles all memory allocation and management that your application requires. This includes the initial allocation that occurs when you declare an object and store data in it, and the release of memory back to the operating system when the object is no longer in use. The automatic garbage collection of unused objects solves all the inherent problems of Win32-based applications when it comes to the mysterious resource losses that Windows would succumb to as applications ran.
Memory management is improved with each new version of the Windows operating system, but the fault is not completely that of the operating system. If you're writing C++ applications, it's very easy to forget to destroy object handles and memory blocks after they've been used. When this happens, there's no way for Windows to recover that memory until the machine is rebooted. In Visual Basic 6, you had to set all your object instances to Nothing to guarantee that memory would be freed after an object was used. The limitations of the runtime environments of all these languages led to the problems of resource loss in Windows. So, in the end, it isn't really the fault of Windows; it's the fault of the developers writing the code that runs in Windows.
The garbage collection mechanism used in .NET is very simple and can be summed up in the following steps:
1. The garbage collector (GC) allocates memory resources on the managed heap when a process starts and sets a pointer to that memory allocation.
2. Each object created for the allocated resource is given the next address space in the managed heap when it's created.
3. The GC continuously scans the managed heap for objects that are out of scope and no longer in use.
4. The GC reclaims the memory of objects that it determines are out of scope and compacts the managed heap for the running process.
This four-step process occurs over and over during the execution lifetime of your application. Under the hood, the GC divides the managed heap running your processes into three generations. Each generation is examined separately by the GC based on when the objects on the heap were created and their dependencies on each other. This mechanism improves the overall performance of garbage collection because constantly scanning the entire managed heap for unused resources would be processor-intensive and time-consuming. By tracking when and where objects are created, the process of garbage collection can effectively determine which objects are in use and which are out of scope.
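You can observe the generational mechanism with the System.GC class. The following is a small sketch for illustration only; the exact generation numbers depend on runtime behavior, and forcing collections like this is something you'd do only in a demo:
using System;
class GenerationDemo
{
    static void Main()
    {
        object obj = new object();
        // A freshly allocated object starts in generation 0
        Console.WriteLine(GC.GetGeneration(obj));  // typically 0
        // Objects that survive a collection are promoted
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));  // typically 1
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));  // typically 2
        // The managed heap spans generations 0 through GC.MaxGeneration
        Console.WriteLine(GC.MaxGeneration);
    }
}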
Although the GC can handle the destruction of most objects on the managed heap, objects such as file handles, network handles, database connections, and window handles wrap unmanaged resources. These objects can be given the correct memory allocation, and the GC knows when they are out of scope, but it doesn't know when to destroy them to reclaim the memory they hold. To reclaim memory from unmanaged resources, you must explicitly destroy the objects by creating the necessary cleanup code: implement the IDisposable interface and override the Dispose method of the object. This isn't always necessary, and should be done only if you know that a resource must be freed when your component is no longer being used.
If you're using an object and you know it's a CTS-compliant managed type, the automatic garbage collection handles reclaiming the resource. Haphazardly calling the Dispose method on objects consumes resources and forces garbage collection. When writing components that use unmanaged resources, you can close file handles and network handles in the Dispose method, and the normal process of garbage collection destroys the object and reclaims the memory allocation.
Because the common language runtime determines when garbage collection takes place, the process is referred to as nondeterministic finalization. In other words, you have no idea when the Finalize method, which marks an object for collection, will run.
Understanding the Dispose method is important because of an unlikely worst-case scenario in which object resources aren't freed and a component attempts to create them again. This situation could occur if the system running a component is depleting its resources and garbage collection isn't occurring on a regular basis. The following code demonstrates how to implement the Dispose method when creating a Windows User Control that holds a database connection, first in Visual Basic .NET and then in C#.
Imports System.Data.SqlClient

Public Class UserControl1
    Inherits System.Windows.Forms.UserControl

    Private cn As New SqlConnection()
    Private components As System.ComponentModel.IContainer

    Public Sub New()
        MyBase.New()
        cn.ConnectionString = "uid=sa;pwd=;database=pubs;server=."
        cn.Open()
        InitializeComponent()
    End Sub

    ' Close the connection and release components when the control
    ' is explicitly disposed
    Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
        If disposing Then
            If Not (components Is Nothing) Then
                components.Dispose()
            End If
            cn.Close()
            cn = Nothing
        End If
        MyBase.Dispose(disposing)
    End Sub

    Private Sub InitializeComponent()
        Me.Name = "UserControl1"
    End Sub
End Class
using System;
using System.Collections;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Windows.Forms;
using System.Data.SqlClient;

namespace cSharpDispose
{
    public class UserControl1 : System.Windows.Forms.UserControl
    {
        private System.ComponentModel.Container components = null;
        private SqlConnection cn = new SqlConnection();

        public UserControl1()
        {
            InitializeComponent();
            cn.ConnectionString =
                "database=pubs;server=localhost;uid=sa;pwd=";
            cn.Open();
        }

        // Close the connection and release components when the control
        // is explicitly disposed
        protected override void Dispose( bool disposing )
        {
            if( disposing )
            {
                if( components != null )
                    components.Dispose();
                cn.Close();
                cn = null;
            }
            base.Dispose( disposing );
        }

        private void InitializeComponent()
        {
            this.Name = "UserControl1";
        }
    }
}
As you can see, implementing Dispose is a simple task. By default, any class that derives from System.ComponentModel.Component has a Dispose method that you can override. If you're writing a component that doesn't derive from System.ComponentModel.Component, you can implement the IDisposable interface and create your own Dispose method.
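In C#, the using statement is a convenient way to guarantee that Dispose is called even if an exception occurs; the compiler expands it into a try/finally block. A small sketch, with the connection string purely illustrative:
using System;
using System.Data.SqlClient;
class UsingDemo
{
    static void Main()
    {
        // Dispose is called automatically when the block exits,
        // even if an exception is thrown inside it
        using (SqlConnection cn = new SqlConnection(
            "database=pubs;server=localhost;uid=sa;pwd="))
        {
            cn.Open();
            // ... work with the connection ...
        } // cn.Dispose() runs here
    }
}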
The .NET Framework Class Library
The second most important piece of the .NET Framework is the .NET Framework class library (FCL). As you've seen, the common language runtime handles the dirty work of actually running the code you write. But to write the code, you need a foundation of available classes to access the resources of the operating system, database server, or file server. The FCL is made up of a hierarchy of namespaces that expose classes, structures, interfaces, enumerations, and delegates that give you access to these resources.
The namespaces are logically defined by functionality. For example, the System.Data namespace contains all the functionality for accessing databases. This namespace is further broken down into System.Data.SqlClient, which exposes functionality specific to SQL Server, and System.Data.OleDb, which exposes functionality for accessing OLE DB data sources. The bounds of a namespace aren't necessarily defined by specific assemblies within the FCL; rather, they're focused on functionality and logical grouping. In total, there are more than 20,000 classes in the FCL, all logically grouped in a hierarchical manner. Figure 1.8 shows where the FCL fits into the .NET Framework and the logical grouping of namespaces.
Figure 1.8 The .NET Framework class library.
To use an FCL class in your application, you use the Imports statement in Visual Basic .NET or the using statement in C#. When you reference a namespace in Visual Basic .NET or C#, you also get the convenience of auto-complete and auto-list members when you access the objects' types in Visual Studio .NET. This makes it very easy to determine what types are available for each class in the namespace you're using. As you'll see over the next several weeks, it's very easy to start coding in Visual Studio .NET. The following code imports the data access namespaces in both Visual Basic .NET and C#.
Imports System
Imports System.Data.SqlClient
Imports System.Data.OleDb
using System;
using System.Data.SqlClient;
using System.Data.OleDb;
On Day 10, "Accessing Data with ADO.NET," you learn more about the common FCL namespaces and assemblies, and how to write applications using them. For now, you can see that without the FCL, the common language runtime and Visual Studio .NET wouldn't be very easy tools to use. The key idea to grasp is that the FCL is 100% available to all .NET languages, so the FCL namespace that implements file I/O capability in C# is the same FCL namespace that's used in Visual Basic .NET, J# .NET, and COBOL .NET.
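For example, reading a text file uses the same System.IO namespace no matter which language you code in. Here's a minimal C# sketch (readme.txt is a hypothetical file); the Visual Basic .NET version differs only in syntax:
using System;
using System.IO;
class FileDemo
{
    static void Main()
    {
        // StreamReader lives in System.IO, the same FCL namespace
        // used from C#, Visual Basic .NET, J# .NET, or COBOL .NET
        using (StreamReader reader = new StreamReader("readme.txt"))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}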
What about C++?
With the introduction of Visual Studio .NET and great new languages like C# and Visual Basic .NET, Microsoft has also improved the C++ language. By providing Managed Extensions for C++, an application written in C++ can take advantage of the core features of .NET and the common language runtime. Garbage collection, cross-language debugging, and code access security are all fundamental aspects of .NET, and are the foundation of the Visual Basic .NET and C# languages. Using Managed Extensions for C++, a traditional C++ developer can take advantage of the features of the .NET Framework directly from Visual Studio .NET, writing applications that contain both managed and unmanaged code. New project templates for C++ are built into VS.NET, and improved compiler options allow C++ applications written using VS.NET to live in the managed environment of the .NET Framework. All the power and flexibility that has made C++ a great language is still there; the Managed Extensions simply take the language to the next level with the power and flexibility of the .NET Framework. Using Managed Extensions for C++, you can create .NET classes that are callable from managed or unmanaged C++ applications.
In this book, to reach the broadest audience possible, all of the code is written in either Visual Basic .NET or C#. If you're a C++ developer who is new to .NET, the syntax of C# will be familiar to you, and you'll be able to write applications immediately using C#. Using Visual Studio .NET as your development tool will allow you to create applications faster and easier than ever, so you can look at this book as a reference on the tool, not the language. No matter what language you develop in, using VS.NET will allow you to create better applications faster.
.NET Servers and the Future of .NET
The designers of the .NET Framework put much thought into how distributed computing should work. It seems that .NET is the next killer app, but to make the .NET Framework a widespread success, actual servers must be built using the .NET Framework. Currently, there are no true .NET servers. There are servers that take advantage of the common language runtime and its managed execution environment, but most servers from Microsoft today still run under COM and unmanaged code.
Commerce Server 2002 is positioned as a .NET server for e-commerce, and applications you design with it can be written entirely in Visual Basic .NET or C#, but the underlying infrastructure of Commerce Server is still based on COM. Because rewriting server applications is a truly monumental task, the move to completely managed .NET servers could take several years. Along the way, there'll be servers such as Commerce Server 2002 that are half managed code and half unmanaged code. From a developer's viewpoint, that's fine, because you don't want to be writing ASP and Visual Basic 6 code for server products while the rest of your distributed application development is in a .NET language.
Currently, Microsoft seems to be positioning server products as .NET Enterprise Servers if they can integrate XML Web services into their existing infrastructure. For example, SQL Server 2000 certainly isn't written in managed code, but there are add-ons to SQL Server 2000 that enable you to expose stored procedures as XML Web services. SQL Server Notification Services is a .NET add-on that can notify .NET applications when certain events fire in SQL Server. BizTalk Server's purpose in life is the orchestration and automation of complex business processes, and it's positioned as a .NET server because of its capability to consume XML Web services. The following Microsoft server products are considered .NET Enterprise Servers because of their capability to at least interact with a distributed environment such as the Internet and their relationship with the .NET Framework concepts:
• Internet Security and Acceleration Server
• Application Center 2000
• Commerce Server 2000 and Commerce Server 2002
• BizTalk Server 2000 and BizTalk Server 2002
• SQL Server 2000
• Exchange Server 2000
• Host Integration Server 2000
In my opinion, whether a .NET server is truly running under the common language runtime is not a deal breaker. For .NET to get to the next step, it must run on other operating systems, not just the Windows family of desktop and server operating systems. The Mono project is a grass-roots effort to port the .NET Framework class library to the Linux operating system, which means the code you're writing now for Windows may eventually run under Linux and, hopefully, Unix as well. You can learn more about the Mono project and its current progress at http://www.go-mono.org. It would be a huge step forward if .NET were ported to the Macintosh operating system as well. Although the Mac represents a small percentage of the overall desktop PC market, its incompatibility with Windows creates headaches for application developers. Eventually, there needs to be consistency across platforms.
Moving into the future with .NET, the sky seems to be the limit. This isn't necessarily because Microsoft is going to think of some great new thing to add to the .NET Framework, even though it most likely will, but it has to do with computing in general and the general infrastructure of our daily lives. As every household and business installs high-speed data access, and as computers become faster and cheaper, the applications you write will have a greater influence on how people look at what computer programs can do. You aren't bound to single servers anymore. Writing truly distributed and scalable applications is very easy because of the groundwork laid out by the .NET Framework. You can begin to look at the code you write not as blocks of modules running on a Windows 2000 Server, but as distributed objects that you can reuse in multiple applications across an enterprise simply by plugging them into an XML Web service. The future of .NET is the concept of a true distributed environment.
Summary
Today you learned about the core concepts of the .NET Framework and how it fits into the vision of .NET. The common language runtime, in conjunction with the .NET Framework class library, gives you the foundation in which to write distributed, scalable, and robust applications. Technologies such as the common type system, garbage collection, and the Common Language Specification make up the core infrastructure that help the common language runtime and the .NET Framework make your applications run better. Starting tomorrow, you'll learn the essentials of writing applications using the tools provided in Visual Studio .NET.
Day 1. Introduction to the Microsoft .NET Framework
This week you learn about the tools that Visual Studio .NET offers. You'll get a good understanding of what types of applications you can create using Visual Studio .NET. But before we get to that point, you must have a solid understanding of what the .NET Framework is and what it can do for you. Understanding the internals of the .NET Framework helps you better understand what's happening when Visual Studio .NET is helping you create applications. By the end of the day, you'll have better insight into the technologies that make up the .NET Framework, how the .NET Framework fits into Microsoft's vision of the future of computing, and how things are much different from the past.
What Is .NET?
When .NET was announced in late 1999, Microsoft positioned the technology as a platform for building and consuming Extensible Markup Language (XML) Web services. XML Web services allow any type of application, be it a Windows- or browser-based application running on any type of computer system, to consume data from any type of server over the Internet. The reason this idea is so great is the way in which the XML messages are transferred: over established standard protocols that exist today. Using protocols such as SOAP, HTTP, and SMTP, XML Web services make it possible to expose data over the wire with little or no modifications to your existing code.
Figure 1.1 presents a high-level overview of the .NET Framework and how XML Web services are positioned.
Figure 1.1 Stateless XML Web services model.
Since the initial announcement of the .NET Framework, it's taken on many new and different meanings to different people. To a developer, .NET means a great environment for creating robust distributed applications. To an IT manager, .NET means simpler deployment of applications to end users, tighter security, and simpler management. To a CTO or CIO, .NET means happier developers using state-of-the-art development technologies and a smaller bottom line. To understand why all these statements are true, you need to get a grip on what the .NET Framework consists of, and how it's truly a revolutionary step forward for application architecture, development, and deployment.
Windows of the Past
In the past, millions of applications were developed for Windows-based systems using a variety of development tools and languages. Visual Basic, C++, Delphi, Java, and Access provided a great toolset that enabled you to write applications for Windows. The problem that crept up again and again was how these applications communicated with each other and how they could communicate with data beyond the departmental server. Because each language has its own runtime environment, they all run essentially inside their own box, using their own way to communicate with core system services. There was no way to get outside the box. When a new feature to a language had to be added, it would be bolted somewhere on to the runtime environment through a new set of API calls. If you wanted to access the new features, each language had its own way of doing so. And, as was the case with Visual Basic, many features were simply not available because the runtime environment of Visual Basic couldn't support them. This problem seemed to have been solved with the Windows Distributed Internet Applications (DNA) architecture, which was based on Component Object Model (COM) components moving data between different types of distributed applications.
Windows DNA and COM
Writing distributed Internet applications became easier as the model of COM services that Windows servers could provide became more stable and widespread. You could write an Active Server Pages (ASP) application and access methods, properties, and events through the object model of components running inside of COM+ services on remote machines. Figure 1.2 shows the flow of a DNA/COM application.
Figure 1.2 DNA and COM in action.
Windows DNA became more accepted because of the ease with which a Visual Basic 6 developer could write components that could be accessed from any other type of application, as long as he had access to the Windows 2000 server that the COM+ services were running on. This is where the problems begin.
If you provide data to outside vendors in an application, you must write the user interface and code to allow them access to what they need. There's no simple way to expose methods, or allow other applications to call methods, on your servers. You have to open up security and give them the keys to the farm, which isn't what IT managers are likely to do. If you want to maintain multiple versions of a component, you are in a very bad way with COM. Because COM makes heavy use of the Registry and doesn't allow for a simple versioning policy, you're essentially maintaining the same component forever. You're constantly adding new features to it, but leaving the old stuff in. This is one of the big rules of COM: Thou shall not change any interfaces to your components. Doing so makes for a huge headache in deployment. If you change an in or out parameter on a method, you've broken the functionality of the component. That means all the components must be recompiled to restore the correct interfaces that the caller expects. After a component is deployed, how do you easily scale it across machines while maintaining any state data that the code expects? This isn't a trivial problem, and companies have spent millions of dollars writing state machines to handle the scalability problems that come with COM.
All these issues are solved with the .NET Framework. The services provided by the .NET Framework enable us to expose methods over HTTP and SOAP through XML Web services. The Windows Registry is not used in .NET, which eliminates the DLLHell of the past and gives us a strong versioning policy. There are many ways to maintain state data, so we can scale applications across processors and across servers with no worry about crashing the applications running on those servers. This all starts with the common language runtime and the base class libraries.
On Day 13, "XML Web Services in .NET," you learn more about the different protocols and messaging standards that make up anXML Web service.
The Common Language Runtime
At the heart of the .NET Framework is the common language runtime. The common language runtime is responsible for providing the execution environment that code written in a .NET language runs under. The common language runtime can be compared to the Visual Basic 6 runtime, except that the common language runtime is designed to handle all .NET languages, not just one, as the Visual Basic 6 runtime did for Visual Basic 6. The following list describes some of the benefits the common language runtime gives you:
• Automatic memory management
• Cross-language debugging
• Cross-language exception handling
• Full support for component versioning
• Access to legacy COM components
• XCOPY deployment
• Robust security model
You might expect all those features, but this has never been possible using Microsoft development tools. Figure 1.3 shows where the common language runtime fits into the .NET Framework.
Figure 1.3 The common language runtime and the .NET Framework.
Code written using a .NET language is known as managed code. Code that uses anything but the common language runtime is known as unmanaged code. The common language runtime provides a managed execution environment for .NET code, whereas the individual runtimes of non-.NET languages provide an unmanaged execution environment.
Inside the Common Language Runtime
The commonlanguage runtime enables code running in its execution environment to have features such as security, versioning, memory management, and exception handling because of the way .NET code actually executes. When you compiled Visual Basic 6 forms applications, you had the ability to compile down to native node or p-code. Figure 1.4 should refresh your memory of what the Visual Basic 6 options dialog looked like.
Figure 1.4 Visual Basic 6 compiler options dialog.
When you compile your applications in .NET, you aren't creating anything in native code. When you compile in .NET, you're converting your code—no matter what .NET language you're using—into an assembly made up of an intermediate language called Microsoft Intermediate Language (MSIL or just IL, for short). The IL contains all the information about your application, including methods, properties, events, types, exceptions, security objects, and so on, and it also includes metadata about what types in your code can or cannot be exposed to other applications. This was called a type library in Visual Basic 6 or an IDL (interface definition language) file in C++. In .NET, it's simply the metadata that the IL contains about your assembly.
The file format for the IL is known as PE (portable executable) format, which is a standard format for processor-specific execution.
When a user or another component executes your code, a process occurs called just-in-time (JIT) compilation, and it's at this point that the IL is converted into the specific machine language of the processor it's executing on. This makes it very easy to port a .NET application to any type of operating system on any type of processor because the IL is simply waiting to be consumed by a JIT compiler.
The first time an assembly is called in .NET, the JIT process occurs. Subsequent calls don't re-JIT the IL; the previously JITted IL remains in cache and is used over and over again. On Day 5, "Writing ASP.NET Applications," you learn more about the JITting process and how it can affect your ASP.NET applications. On Day 19, "Understanding Microsoft Application Center Test," when you learn about Application Center Test, you also see how the warm-up time of the JIT process can affect application performance.
Understanding the process of compilation in .NET is very important because it makes clear how features such as cross-language debugging and exception handling are possible. You're not actually compiling to any machine-specific code—you're simply compiling down to an intermediate language that's the same for all .NET languages. The IL produced by J# .NET and C# looks just like the IL created by the Visual Basic .NET compiler. These instructions are the same, only how you type them in Visual Studio .NET is different, and the power of the common language runtime is apparent.
When the IL code is JITted into machine-specific language, it does so on an as-needed basis. If your assembly is 10MB and the user is only using a fraction of that 10MB, only the required IL and its dependencies are compiled to machine language. This makes for a very efficient execution process. But during this execution, how does the common language runtime make sure that the IL is correct? Because the compiler for each language creates its own IL, there must be a process that makes sure what's compiling won't corrupt the system. The process that validates the IL is known as verification. Figure 1.5 demonstrates the process the IL goes through before the code actually executes.
Figure 1.5 The JIT process and verification.
When code is JIT compiled, the common language runtime checks to make sure that the IL is correct. The rules that the common language runtime uses for verification are set forth in the Common Language Specification (CLS) and the Common Type System (CTS).
Understanding the Common Language Specification
The CLS describes the concrete guidelines that make a .NET language compliant with the common language runtime. That doesn't mean a .NET language can't have language-specific features, but it does indicate that to be considered a .NET language, the language must comply with the set of requirements set forth in the CLS. All features added to a .NET language and that aren't part of the CLS won't be exposed to other .NET languages at runtime.
If your code is fully CLS compliant, it's guaranteed to interoperate with all other components written in any .NET language. Certain languages, such as C#, attempt to accommodate developers moving from C and C++ with the similarity of their syntaxes. Because C# attracts such developers, it includes functionality familiar from their native languages, such as pointers and code access to unsafe memory blocks. This functionality is not CLS compliant and won't be accessible by other .NET languages, but it's allowed by the common language runtime and the language-specific compilers. To make sure that your code is CLScompliant, compilers such as C# include checks for non-CLS-compliant code through the use of attributes. If you apply theCLSCompliantAttribute attribute to a class or method in your code and the code isn't CLS compliant, an error occurs and the compile fails. The following code demonstrates how to apply the CLSCompliantAttribute attribute in your code:
using System;
[assembly: CLSCompliantAttribute(true)]
[CLSCompliantAttribute(true)]
public class Class1
{
public void x(UInt32 x){}
public static void Main( )
{
}
}
In this case, the code won't compile because unsigned integers aren't part of the CLS.
The second part of the verification process that the JIT compiler goes through to make sure that your code executes correctly is the verification of types. All types used in .NET must conform to the CTS.
Understanding the Common Type System
The CTS sets forth the guidelines for data type safety in .NET.
In the past, there were no rules for type safety across execution runtimes, hence the general protection fault (GPF) and blue screen of death errors that could occur when running applications. The culprit behind those meltdowns was the overlapping of memory by data types. This was a common occurrence in Windows 3.1, Windows 95, and Windows 98. When a Visual Basic developer deployed a new application, fingers had to be crossed to make sure that the data types and memory access between the newly installed DLLs and the existing ones on the system mingled happily. Most of the time they did, but when they didn't, errors occurred.
In .NET, the CTS defines types and how they can act within the bounds of the common language runtime. There are two type classifications in .NET: value types and reference types.
Value Types
Value types directly contain the data you assign them. They're built into the common language runtime and derive directly from the base System.Object type. Examples of value types are primitive types, structures, and enumerations. Primitive types can be further broken down into numbers, such as Boolean, byte, short, integer, long, single, double, decimal, date, and char.
Reference Types
Reference types don't directly contain any data; rather, they point to a memory location that contains the actual data. Reference types are built into the common language runtime and derive directly from the base System.Object type. Some examples of reference types are strings, classes, arrays, delegates, and modules (see Figure 1.6).
Figure 1.6 The common type system defined.
To make the difference between Value types and Reference types clearer, consider the following code. It accesses a primitive type (which is a value type) and a class (which is a reference type), and attempts to assign values to them.
using System;
namespace cSharp_ValueReference
{
class Class1
{
static public int x;
[STAThread]
static void Main(string[] args)
{
x=4;
int y;
y = x;
x=0;
// Since each Value type contains its own data,
// modifying the variable X after setting Y to the value
// of X does not affect either variable
Console.WriteLine(x);
Console.WriteLine(y);
// Create an instance of Class2
Class2 ref1 = new Class2();
// Set the refValue of this instance to 5
ref1.refValue=5;
// Create an object reference to the ref1 class
Class2 ref2 = ref1;
// Set the refValue of the object
ref2.refValue=10;
// Notice how the results are the same, even
// though you set re1.refValue to 5, the reference
// to this memory was overridden by the value of 10
Console.WriteLine(ref1.refValue);
Console.WriteLine(ref2.refValue);
Console.ReadLine();
}
}
class Class2
{
public int refValue;
}
}
Module Module1
Sub Main()
Dim X As Integer = 4
Dim Y As Integer
intY = X
intX = 0
Console.WriteLine(X)
Console.WriteLine(Y)
Dim ref1 As Class2 = New Class2()
ref1.refValue = 5
Dim ref2 As Class2 = ref1
ref2.refValue = 10
Console.WriteLine(ref1.refValue)
Console.WriteLine(ref2.refValue)
Console.ReadLine()
End Sub
End Module
Class Class2
Public refValue As Integer
End Class
In both examples, the values of the value type variables X and Y are 0 and 4, whereas the values of the reference types ref1 andref2 are both 10. Because the reference type points to the same memory allocation for the initial object ref1, the value for all variables set to an instance of that object is always the last value assigned. Figure 1.7 shows the console output of the code.
Figure 1.7 Value and reference type test output.
You don't normally get into much trouble when dealing with reference types and value types like the example describes. When you're creating instances of classes, always derive from a new instance of the object, not a previously set instance.
Now that you have an understanding of what the CTS is and how it works, you need to see how the types are removed from memory. Removing types that are no longer referenced in your applications is known as garbage collection.
Handling Memory and Garbage
The common language runtime handles all memory allocation and management that your application requires. This includes the initial allocation that occurs when you declare an object and store data in it, and the release of memory back to the operating system when the object is no longer in use. The automatic garbage collection of unused objects solves all the inherent problems of Win32-based applications when it comes to the mysterious resource losses that Windows would succumb to after running applications.
Memory management is improved with each new version of the Windows operating system, but the fault is not completely that of the operating system. If you're writing C++ applications, it's very easy to forget to destroy object handles and memory blocks after they've been used. When this happens, there's no way for Windows to recover that memory until the machine is rebooted. In Visual Basic 6, you had to set all your object instances toNothing to guarantee that memory would be freed after an object was used. The limitations for the runtime environments of all languages lead to the problems of resource loss in Windows. So, in the end, it isn't really the fault of Windows—it's the fault of the developers writing the code that runs in Windows.
The garbage collection mechanism used in .NET is very simple and can be summed up in the following steps:
1. The garbage collector (GC) allocates memory resources on the managed heap when a process starts and sets a pointer to that memory allocation.
2. Each object created for the allocated resource is given the next address space in the managed heap when it's created.
3. The GC continuously scans the managed heap for objects that are out of scope and no longer in use.
4. The GC reclaims stack heaps that it determines are out of scope and compacts the managed heap for the running process.
This four-step process occurs over and over during the execution lifetime of your application. Under the hood, the GC divides the managed heap running your processes into three generations. Each generation is examined separately by the GC based on when the objects on the heap were created and their dependency to each other. This mechanism improves the overall performance of garbage collection because constantly scanning the entire managed heap for unused resources would be processor-intensive and time-consuming. By splitting apart when and where objects are created, the process of garbage collection can effectively determine what objects are in use and what objects are out of scope.
Although the GC can handle the destruction of most objects on the managed heap, objects such as file handles, network handles, database connections, and window handles are unmanaged resources that are created on the managed heap. These resources can be given the correct memory allocation, and the GC knows when they are out of scope, but it doesn't know when to destroy those objects to reclaim the memory on the stack. To reclaim memory from unmanaged resources, you must explicitly destroy the objects by creating the necessary cleanup code to implement the IDisposable interface and override theDispose method of the object. This isn't always necessary, and should be used only if you know that a resource must be freed when your component is no longer being used.
If you're using an object and you know it's a CTS-compliant managed type, the automatic garbage collection handles reclaiming the resource. Haphazardly calling the Dispose method on objects consumes resources and forces garbage collection. When writing components that use unmanaged resources, you can close file handles and network handles in the Dispose method, and the normal process of garbage collection destroys the object and reclaims the memory allocation.
Because the common language runtime determines when garbage collection takes place, it's referred to as nondeterministic finalization. In other words, you have no idea when the finalize method, which marks an object for collection, will occur.
The reason that understanding the existence of Dispose method is important is because of an unlikely worst-case scenario in which object resources aren't freed and a component attempts to create them again. This situation could occur if the system running a component is depleting its resources and garbage collection isn't occurring on a regular basis. The following code demonstrates how to implement the Dispose method when creating a Windows User Control and implementing a database connection.
Imports System.Data.SqlClient
Public Class UserControl1
Inherits System.Windows.Forms.UserControl
Private cn As New SqlConnection()
Public Sub New()
MyBase.New()
cn.ConnectionString = "uid=sa;pwd=;database=pubs;server=."
cn.Open()
InitializeComponent()
End Sub
Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
If disposing Then
If Not (components Is Nothing) Then
components.Dispose()
cn.Close()
cn = Nothing
End If
End If
MyBase.Dispose(disposing)
End Sub
Private components As System.ComponentModel.IContainer
Private Sub InitializeComponent()
Me.Name = "UserControl1"
End Sub
End Class
using System;
using System.Collections;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Windows.Forms;
using System.Data.SqlClient;
namespace cSharpDispose
{
public class UserControl1 : System.Windows.Forms.UserControl
{
private System.ComponentModel.Container components = null;
private SqlConnection cn;
public UserControl1()
{
InitializeComponent();
cn.ConnectionString=
"database=pubs;server=localhost;uid=sa;pwd=";
cn.Open();
}
protected override void Dispose( bool disposing )
{
if( disposing )
{
if( components != null )
components.Dispose();
cn.Close();
cn=null;
}
base.Dispose( disposing );
}
private void InitializeComponent()
{
this.Name = "UserControl1";
this.Load += new
System.EventHandler(this.UserControl1_Load);
}
}
}
As you can see, implementing Dispose is a simple task. By default, any class that derives from System.ComponentModel.Componenthas a Dispose method that you can override. If you're writing a component that doesn't derive fromSystem.ComponentModel.Component, you can implement the IDisposable interface and create your own Dispose method.
The .NET Framework Class Library
The second most important piece of the .NET Framework is the .NET Framework class library (FCL). As you've seen, the common language runtime handles the dirty work of actually running the code you write. But to write the code, you need a foundation of available classes to access the resources of the operating system, database server, or file server. The FCL is made up of a hierarchy of namespaces that expose classes, structures, interfaces, enumerations, and delegates that give you access to these resources.
The namespaces are logically defined by functionality. For example, the System.Data namespace contains all the functionality for accessing databases. This namespace is further broken down into System.Data.SqlClient, which exposes functionality specific to SQL Server, and System.Data.OleDb, which exposes functionality for accessing OLE DB data sources. The bounds of a namespace aren't necessarily defined by specific assemblies within the FCL; rather, they're focused on functionality and logical grouping. In total, there are more than 20,000 classes in the FCL, all logically grouped in a hierarchical manner. Figure 1.8 shows where the FCL fits into the .NET Framework and the logical grouping of namespaces.
Figure 1.8 The .NET Framework class library.
To use an FCL class in your application, you use the Imports statement in Visual Basic .NET or the using directive in C#. When you reference a namespace in Visual Basic .NET or C#, you also get the convenience of auto-complete and auto-list members when you access the namespace's types in Visual Studio .NET. This makes it very easy to determine which members are available for each type in the namespace you're using. As you'll see over the next several weeks, it's very easy to start coding in Visual Studio .NET. The following code imports the data namespaces, first in Visual Basic .NET and then in C#.
Imports System
Imports System.Data.SqlClient
Imports System.Data.OleDb
using System;
using System.Data.SqlClient;
using System.Data.OleDb;
On Day 10, "Accessing Data with ADO.NET," you learn more about the common FCL namespaces and assemblies, and how to write applications using them. For now, you can see that without the FCL, the common language runtime and Visual Studio .NET wouldn't be very easy tools to use. The key idea to grasp is that the FCL is 100% available to all .NET languages, so the FCL namespace that implements file I/O capability in C# is the same FCL namespace that's used in Visual Basic .NET, J# .NET, and COBOL .NET.
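As a quick illustration of that point, here's a short C# sketch that uses the file I/O classes in the System.IO namespace; the same StreamReader class, with the same members, is what a Visual Basic .NET or COBOL .NET program would call (the readme.txt file name is just a placeholder):
using System;
using System.IO;

public class FileDemo
{
    public static void Main()
    {
        // StreamReader comes from the FCL's System.IO namespace and is
        // identical across every .NET language.
        using (StreamReader reader = new StreamReader("readme.txt"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                Console.WriteLine(line);
            }
        }
    }
}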
What about C++?
With the introduction of Visual Studio .NET and great new languages like C# and Visual Basic .NET, Microsoft has also improved the C++ language. By providing Managed Extensions for C++, an application written in C++ can take advantage of the core features of .NET and the common language runtime. Garbage collection, cross-language debugging, and code access security are all fundamental aspects of .NET, and they're the foundation of the Visual Basic .NET and C# languages. Using Managed Extensions for C++, a traditional C++ developer can take advantage of the features of the .NET Framework directly from Visual Studio .NET, writing applications that contain both managed and unmanaged code. New project templates for C++ are built into VS.NET, and improved compiler options allow C++ applications written using VS.NET to live in the managed environment of the .NET Framework. All the power and flexibility that has made C++ a great language is still there, and the Managed Extensions take the language to the next level with the power and flexibility of the .NET Framework. Managed Extensions for C++ also let you create .NET classes that are callable from either managed or unmanaged C++ applications.
In this book, to reach the broadest audience possible, all the code is written in either Visual Basic .NET or C#. If you're a C++ developer who is new to .NET, the syntax of C# will be familiar to you, and you'll be able to start writing applications immediately using C#. Using Visual Studio .NET as your development tool will allow you to create applications faster and more easily than ever, so you can look at this book as a reference on the tool, not the language. No matter what language you develop in, VS.NET will help you create better applications faster.
.NET Servers and the Future of .NET
The designers of the .NET Framework put much thought into how distributed computing should work. .NET is positioned as the next killer platform, but to make the .NET Framework a widespread success, actual servers must be built on it. Currently, there are no true .NET servers. There are servers that take advantage of the common language runtime and its managed execution environment, but most servers from Microsoft today still run on COM and unmanaged code.
Commerce Server 2002 is positioned as a .NET server for e-commerce, and the applications you design with it can be written entirely in Visual Basic .NET or C#, but the underlying infrastructure of Commerce Server is still based on COM. Because rewriting server applications is a truly monumental task, the move to fully .NET servers could take several years. Along the way, there'll be servers such as Commerce Server 2002 that are half managed code and half unmanaged code. From a developer's viewpoint, that's fine: you can program against such servers in a .NET language rather than writing ASP and Visual Basic 6 code for server products while the rest of your distributed application is developed in .NET.
Currently, Microsoft seems to position server products as .NET Enterprise Servers if they can integrate XML Web services into their existing infrastructure. For example, SQL Server 2000 certainly isn't written in managed code, but there are add-ons to SQL Server 2000 that enable you to expose stored procedures as XML Web services. The SQL Server Notification Services add-on can notify .NET applications when certain events fire in SQL Server. BizTalk Server's purpose in life is the orchestration and automation of complex business processes, and it's positioned as a .NET server because of its capability to consume XML Web services. The following Microsoft server products are considered .NET Enterprise Servers because they can at least interact with a distributed environment such as the Internet and have some relationship with .NET Framework concepts:
• Internet Security and Acceleration Server
• Application Center 2000
• Commerce Server 2000 and Commerce Server 2002
• BizTalk Server 2000 and BizTalk Server 2002
• SQL Server 2000
• Exchange Server 2000
• Host Integration Server 2000
In my opinion, whether a .NET server is truly running under the common language runtime is not a deal breaker. For .NET to get to the next step, it must run on other operating systems, not just the Windows family of desktop and server operating systems. Currently, the Mono project is a grass-roots effort to port the .NET Framework class library to the Linux operating system. That means the code you're writing now for Windows could eventually run under Linux and, hopefully, UNIX as well. You can learn more about the Mono project and its current state of development at http://www.go-mono.org. It would also be a huge step forward if .NET were ported to the Macintosh operating system. Although the Mac still holds a small percentage of the overall desktop PC market, its incompatibility with Windows creates headaches for application developers. Eventually, there needs to be consistency across platforms.
Moving into the future with .NET, the sky seems to be the limit. This isn't necessarily because Microsoft will think of some great new thing to add to the .NET Framework (even though it most likely will); it has to do with computing in general and the infrastructure of our daily lives. As every household and business gets high-speed data access, and as computers become faster and cheaper, the applications you write will have a greater influence on how people view what computer programs can do. You aren't bound to single servers anymore. Writing truly distributed and scalable applications is far easier because of the groundwork laid by the .NET Framework. You can begin to look at the code you write not as blocks of modules running on a Windows 2000 server, but as distributed objects that you can reuse in multiple applications across an enterprise simply by plugging them into an XML Web service. The future of .NET is the concept of a true distributed environment.
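To make that idea concrete, the following sketch shows roughly what exposing a component as an XML Web service looks like with ASP.NET (.asmx); the OrderService class, its namespace URI, and its method are hypothetical examples, not from this book's samples:
using System.Web.Services;

[WebService(Namespace = "http://example.com/orders")]
public class OrderService : WebService
{
    // [WebMethod] makes the method callable over SOAP/HTTP by any
    // client, on any platform, in any language.
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        return "Order " + orderId + " has shipped.";
    }
}
A Windows Forms client, a browser-based client, or a non-Windows client could all consume this service over HTTP without knowing anything about its implementation.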
Summary
Today you learned about the core concepts of the .NET Framework and how it fits into the vision of .NET. The common language runtime, in conjunction with the .NET Framework class library, gives you the foundation on which to write distributed, scalable, and robust applications. Technologies such as the common type system, garbage collection, and the Common Language Specification make up the core infrastructure that helps the common language runtime and the .NET Framework make your applications run better. Starting tomorrow, you'll learn the essentials of writing applications using the tools provided in Visual Studio .NET.