Sunday, December 24, 2006

SOME LOW LEVEL CONCEPTS

People who work on MS.NET technologies are usually quite clear about the technical facts of MS.NET, but very few are sure of the LOW LEVEL CONCEPTS that MS.NET is based on. Through this article I am targeting those developers, professionals and of course learners.

Let’s start with a very simple “Hello World” program here. What we do: open Notepad, write the program and save it to some location. (People who use Visual Studio .NET can use the templates provided by Microsoft for creating a simple console application.)
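
As a reference point, a minimal HelloWorld.cs could look like this (the class name is of course up to you):

using System;

class HelloWorld
{
    static void Main()
    {
        Console.WriteLine("Hello World");
    }
}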

Let’s say I have written a program “HelloWorld.cs” and saved it to the C: drive. Let me tell you the steps of what happens, one by one. After saving the program, the next step is to compile it using some compiler; in our case the program is written in C#, so we will use a C# compiler. The compiler must match the language in which we write the program: if we write the program in C#, we have to use the C# compiler, and the same goes for VB, J#, Python etc.
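
For example, assuming the .NET Framework SDK is installed and csc.exe (the C# compiler) is on the PATH, the compilation step is just:

csc HelloWorld.cs

This produces HelloWorld.exe in the current directory.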

After the compilation process, regardless of which compiler we use, the result is a managed module. A managed module is a standard 32 bit Microsoft Windows Portable Executable (PE32) file or a standard 64 bit Windows Portable Executable (PE32+) file. This means that if we are targeting 32 bit platforms then the result will be a PE32 file, and a PE32+ file for 64 bit platforms like the Windows XP 64 bit operating system.

Now, after the compilation process, we have the managed module. Let me tell you what we have inside these PE files.

The first thing that the PE file (managed module) contains is the PE header, which can be PE32 (if the file is targeted to run on both 32 bit and 64 bit versions of Windows) or PE32+ (if the file is targeted to run only on 64 bit versions of Windows). This header also includes the type of file, which can be GUI (Graphical User Interface), CUI (Character User Interface) or DLL (Dynamic Link Library), and also a timestamp that tells us when the file was built.

The second thing is the CLR header. Before telling you about the CLR header, let me clear up the fundamentals of the CLR. CLR stands for Common Language Runtime; as the name suggests, it is the runtime used by different and varied programming languages, and the CLR has no idea which programming language the developer used for the source code. Fair enough about the CLR; now come to the point, i.e. the concept of the CLR header. It contains the information that makes a managed module a managed module.

The CLR header includes:
(a) Version of the CLR required
(b) Some flags
(c) The metadata token of the entry point method (Main)
(d) Location and size of the
    a. Managed module metadata
    b. Resources
    c. Strong name
    d. Some flags
    e. And other less interesting stuff


The third thing that the managed module contains is the METADATA.

There are two main kinds of metadata tables:
(a) Tables that describe the types and members defined in our source code, i.e. “HelloWorld.cs”
(b) Tables that describe the types and members referenced by our source code, i.e. “HelloWorld.cs”

The fourth thing that we have in the managed module is the INTERMEDIATE LANGUAGE CODE, also called MSIL, or IL code for short, or sometimes MANAGED CODE because the CLR manages its execution. This is the code that the compiler produces as it compiles the source code.
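
Just to get a feel for IL, this is roughly what ildasm.exe shows for the Main method of a program like HelloWorld.cs (the exact output varies with the compiler version):

.method private hidebysig static void Main() cil managed
{
  .entrypoint
  ldstr      "Hello World"
  call       void [mscorlib]System.Console::WriteLine(string)
  ret
}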

One or more managed modules, along with resource files (optional), are combined into an assembly, which also contains a MANIFEST in addition to all this. It is the ASSEMBLY that is targeted by the CLR for execution.

Hope you are a bit clearer now on the fundamentals of MANAGED MODULES. Comments will be highly appreciated.

Tuesday, December 19, 2006

Ye KT kya hai bhai.. (The power of a Knowledge Transfer Session)



One of my best friends in Chennai was supposed to reach my residence this evening, but unfortunately he is not coming.

I got a call from him saying that he had to cancel his flight.

I asked “Why so?”

He told me that some new bugs had come in this morning, so he has to stay in the office to keep track of the bug fixing and, of course, to help his team members engineer the fixes.

He was quite upset because of the cancellation of the flight.

I asked him the fundamental reason why he needed to stay there, as there are lots of other team members who could happily handle these bugs.

He replied, “I am the only person who has a complete idea of the functionality as well as the technology.”

I was surprised, and asked him why he had never given any Knowledge Transfer Sessions to his team members whenever there were no bugs to work on.

He replied “I think you are quite correct, but I never got time to do that.”

Nowadays we give more time to achieving deadlines than to creating better quality software, I mean healthier systems. People have less time to enhance quality than to engineer the basic system, which I think is the reason support projects are so numerous these days.

What happens to this project if he leaves the company after some time?

Nowadays, when the attrition rate is so high, the concept of KT (Knowledge Transfer Sessions) is very much required. Even if no team member is about to leave the company or the project, we should build a habit of knowledge sharing, which enhances our product knowledge as well as our interest in the project. Working on a project means that even if you are working on module A, you should have some high-level view of module B as well as the high-level functionality of the project (knowing the low-level functionality is even better).
No one knows “kab kaun chala jaye.. kab kaun bimar par jaye” (who will leave the project when, or who will fall ill).









Sunday, December 17, 2006

Which is better to do, J2EE or MS.NET, from an Industry Perspective?



I met an old friend of mine at the MCA examination center in New Delhi. He completed the BIT from IGNOU along with me in 2004.

I reached the examination center very early, I guess around 12:30 PM (IST) (the exam was supposed to start at 2 PM), so I had plenty of time to relax before the examination. I was just wandering around the MCA examination center hoping to find one of my old friends so that we could have a chat, of course about our past life at PCTI, Pitampura, which was my learning center for the BIT course.

Finally, around 1:00 PM (IST), I met the guy I have just talked about. He was surprised to see that I had also come for the examination. After a little bit of old gossip, he came to the point and asked me the same question that my batch mates ask me.

“Which is better to do, J2EE or MS.NET, from an industry perspective?”

I took a few seconds to tell him, “Go for .NET technologies.”

And, as a normal phenomenon, he asked me again, “Why so? Why not J2EE?”

After that we just talked about the benefits of .NET over J2EE.

The first point I discussed was ease of use and ease of learning.

Microsoft .NET offers a better integrated, lower cost, easier to use, and more manageable environment for software development than the Java J2EE platform. It also offers a much better way to take advantage of low-cost Intel-based servers for enterprise-scale applications.

Relative to .NET, enterprise software development on the J2EE platform is like trying to count a herd of sheep by counting the legs and dividing by four. Sure, it can be done, but it takes longer, costs more, and is harder to change in response to future business challenges and opportunities.

A few years ago J2EE was a more competitive option because the Windows/Intel server platform was still not quite ready for very large-scale enterprise application deployment. That is no longer so. With Windows Server 2003, increasingly capable Intel-based server hardware, and the rapidly maturing Windows/Intel server applications from Microsoft and other software vendors, Microsoft .NET is now able to meet even very large-scale enterprise application requirements.

He asked again “What circumstances favor .NET over J2EE?”

The most important advantage for .NET over J2EE is in circumstances where minimization of total cost is a high priority. Notwithstanding "figures don't lie but . . ." claims to the contrary, Windows on Intel servers delivers much more bang for the buck than the mainframe or UNIX-based platforms typically used for J2EE deployment. If total cost of ownership (TCO) really matters then .NET is the obvious choice over J2EE.


A second important advantage for .NET over J2EE is anywhere Windows-based server infrastructure and related development and deployment skills are already in place. An organization with skilled BASIC, C, C++, and COBOL programmers already in place, that is already familiar with the Windows development environment, will get much further much faster and at much lower cost using .NET than by trying to turn everyone into Java programmers.

Third, projects aiming to take advantage of the opportunities created by the new Web services standards favor .NET over J2EE as well. Although the Java world is working hard to catch up, crucial Web services standards like XML, SOAP, and WSDL are built into .NET by design while they are still only, in effect, 'bolted on' to J2EE. Development and deployment of Web services applications is significantly easier, faster, and less costly on the .NET platform than it is on J2EE.

Overall, .NET is the better choice.

But I guess some of my friends who are working with J2EE technologies had already convinced him that J2EE is better, and that since we had also learned Java in the BIT course, J2EE would be the better option for him.

So he asked again “Are there any circumstances that favor J2EE over .NET?”

The obvious one is when applications, for one reason or another, absolutely must run on something other than Windows on Intel systems. If an application really must run on, say, IBM mainframes or Sun Solaris boxes, then Java/J2EE may be the only option (albeit a costly one).

The second is simply prior commitment to the Java platform and associated ready availability of Java/J2EE development, deployment and administration skills in-house. In effect, the more an organization or some organizational sub-unit is already using Java/J2EE, the more circumstances are likely to favor continuing to do so. But despite promises of easy, rapid development, the J2EE platform is a daunting one with a steep and difficult learning curve. So J2EE is a plausible option only for organizations that have already climbed that slope and paid the price of learning the platform -- not for those that have not already done so.

Third, J2EE is a somewhat more appealing choice for organizations where there is a large proportion of existing UNIX-based IT infrastructure (e.g. Solaris, HP/UX, AIX and so forth) already in place. As a rough rule of thumb, the greater the proportion of UNIX-based servers already in place, the greater the relative advantage for J2EE versus .NET.

Then I told him a story to explain how big organizations are using .NET for building Web applications.

MIT has chosen .NET over J2EE as the toolkit for developing Internet applications. MIT comes from the open-source and UNIX world (and a lot of people there do anything they can to avoid anything Microsoft-related), so you would expect them to choose a more vendor-neutral platform (e.g., Apache/Tomcat/Java) and all the other open development tools available.

Would that be a fashion statement as well? What are the particular advantages of .NET over J2EE that made it the superior choice? In my mind, I would look at a solution that didn't require paying Microsoft's excessive licensing fees, and that used as much open-source software as possible.

And what about security?

A poorly-configured Apache server can be as bad as a poorly configured IIS server, but isn't it apparent that .NET servers will be less secure than their open-source counterparts (given the security track record of Microsoft products --at least in the short term)?

MIT as a whole is pretty wealthy but individual IT budgets tend to be modest. With only $1 or $2 million to spend per system MIT needs to put its money into capabilities that are valued by end-users. A true J2EE system would involve EJB and container-managed persistence. All of that automatically generated SQL code from the application server can be 100X slower than hand-authored SQL, which means MIT would have to buy 100 times as much computing hardware to support the same application.

Not to mention that projects built on top of J2EE are famously unproductive, expensive, behind schedule, and inflexible. If you had a $20 million, 3-year budget to build something simple like photo.net, you could certainly do it with J2EE (or raw C with the Oracle C library for that matter). But MIT needs to produce things sort of like photo.net with a few undergrads hacking away over a summer. And they need to be able to tear down 20 percent of it and add another 50 percent the next summer as ideas evolve.

J2EE also looks like a classical IT dead end. There are dozens of different application servers and execution environments for Java. All have subtle differences, so an application built for, say, WebLogic won't run under Tomcat or WebSphere. Maybe the application can be ported by changing only 1 percent of the code, but it isn't obvious which 1 percent. This is the same situation that Unix presented in the early 1990s. An AIX application wouldn't just work on SunOS or HP-UX. Close but not compatible. Faced with all of these choices, most application developers chose to develop only for Windows. In the amount of time it would take MIT to evaluate and select the right Java application server, V1.0 of a system could be built and launched using the Microsoft tools.
Anyway, it wasn't my decision. But if MIT wants something that can be maintained and extended by its students and graduates, Microsoft .NET is a logical choice for that reason as well.





Please explain the life cycle of ASP.NET pages from start to end

One of the most common questions I have been asked in almost every interview related to MS.NET is “Please explain the life cycle of ASP.NET pages, from start to end.” People generally tell the interviewer only the most common steps, so be prepared for a complete discussion of it from now on.

Each request for a Microsoft® ASP.NET page that hits Microsoft® Internet Information Services (IIS) is handed over to the ASP.NET HTTP pipeline. The HTTP pipeline is a chain of managed objects that sequentially process the request and make the transition from a URL to plain HTML text happen. The entry point of the HTTP pipeline is the HttpRuntime class. The ASP.NET infrastructure creates one instance of this class per each AppDomain hosted within the worker process (remember that the worker process maintains one distinct AppDomain
per each ASP.NET application currently running).

The HttpRuntime class picks up an HttpApplication object from an internal pool and sets it to work on the request. The main task accomplished by the HTTP application manager is finding out the class that will
actually handle the request. When the request is for an .aspx resource, the handler is a page handler—namely, an instance of a class that inherits from Page. The association between types of resources and types of handlers is stored in the configuration file of the application. More exactly, the default set of mappings is defined in the <httpHandlers> section of the machine.config file. However, the application can customize the list of its own
HTTP handlers in the local web.config file. The line below illustrates the code that defines the HTTP handler for .aspx resources.

<add verb="*" path="*.aspx" type="System.Web.UI.PageHandlerFactory"/> 

An extension can be associated with a handler class or, more generally, with a handler factory class. In all cases, the HttpApplication object in charge of the request gets an object that implements the IHttpHandler interface. If the resource/class association is resolved in terms of an HTTP handler, the returned class implements the interface directly. If the resource is bound to a handler factory, an extra step is necessary: a handler factory class implements the IHttpHandlerFactory interface, whose GetHandler method returns an IHttpHandler-based object.

How can the HTTP run time close the circle and process the page request? The IHttpHandler interface features the ProcessRequest method. By calling this method on the object that represents the requested page, the ASP.NET infrastructure starts the process that will generate the output for the browser.
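
To see how small that contract is, here is a minimal custom handler sketch; the page handler is conceptually just a much richer implementation of the same interface (the class name is hypothetical):

using System.Web;

public class HelloHandler : IHttpHandler
{
    // Called by the pipeline once the handler has been selected for the request.
    public void ProcessRequest(HttpContext context)
    {
        context.Response.Write("<html><body>Hello from a handler</body></html>");
    }

    // True means the runtime may reuse this instance for other requests.
    public bool IsReusable
    {
        get { return true; }
    }
}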

The Real Page Class

The type of the HTTP handler for a particular page depends on the URL. The first time
the URL is invoked, a new class is composed and dynamically compiled to an assembly. The source code of the class is the outcome of a parsing process that examines the .aspx sources. The class is defined as part of the namespace ASP and is given a name that mimics the original URL. For example, if the URL endpoint is page.aspx, the name of the class is ASP.Page_aspx. The class name, though, can be programmatically controlled by setting the ClassName attribute in the @Page directive.

The base class for the HTTP handler is Page. This class defines the minimum set of methods and properties shared by all page handlers. The Page class implements the IHttpHandler interface.

Under a couple of circumstances, the base class for the actual handler is not Page but a different class. This happens, for example, if code-behind is used. Code-behind is a development technique that insulates the code necessary to a page into a separate C# or Microsoft Visual Basic® .NET class. The code of a page is the set of event handlers and helper methods that actually create the behavior of the page. This code can be defined inline using the <script runat=server> tag or placed in an external class—the code-behind class. A code-behind class is a class that inherits from Page and specializes it with extra methods. When specified, the code-behind class is used as the base class for the HTTP handler.
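
A minimal code-behind class might look like the following sketch (names are hypothetical, and it assumes AutoEventWireup is true in the @Page directive):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class HelloPage : Page
{
    // Declared protected so the control declared in the .aspx markup binds to it.
    protected Label MessageLabel;

    // Wired to the Load event automatically when AutoEventWireup is true.
    private void Page_Load(object sender, EventArgs e)
    {
        MessageLabel.Text = "Hello at " + DateTime.Now.ToString();
    }
}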

The other situation in which the HTTP handler is not based on Page is when the configuration file of the application contains a redefinition for the PageBaseType attribute in the <pages>
section.

<pages PageBaseType="Classes.MyPage, mypage" /> 
The PageBaseType attribute indicates the type and the assembly that contains the base class for page handlers. Derived from Page,
this class can automatically endow handlers with a custom and extended set of methods and properties.
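
A sketch of what such a base class might look like, matching the hypothetical Classes.MyPage from the configuration line above:

using System.Web.UI;

namespace Classes
{
    public class MyPage : Page
    {
        // Every page handler derived from this class inherits this member.
        protected bool IsAuthenticatedRequest
        {
            get { return Request.IsAuthenticated; }
        }
    }
}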

The Page Lifecycle

Once the HTTP page handler class is fully identified, the ASP.NET run time calls the
handler's ProcessRequest method to process the
request. Normally, there is no need to change the implementation of the method
as it is provided by the Page class.

This implementation begins by calling the method FrameworkInitialize, which builds the controls tree for the page. The method is a protected and virtual member of the TemplateControl class—the class from which Page itself
derives. Any dynamically generated handler for an .aspx resource overrides FrameworkInitialize. In
this method, the whole control tree for the page is built.

Next, ProcessRequest takes the page through various phases: initialization, loading of view state information and postback data, loading of the page's user code, and execution of postback server-side events. After that, the page enters rendering mode: the updated view state is collected, the HTML code is generated, and the output is sent to the response stream. Finally, the page is unloaded and the request is considered completely served.

During the various phases, the page fires a few events that Web controls and user-defined code can intercept and handle. Some of these events are specific for embedded controls and subsequently can't be handled at the level of the .aspx code.

A page that wants to handle a certain event should explicitly register an appropriate handler. However, for backward compatibility with the earlier Visual Basic programming style, ASP.NET also supports a form of implicit event hooking. By default, the page tries to match special method names with events; if a match is found, the method is considered a handler for the event. ASP.NET provides special recognition for a handful of method names, including Page_Init, Page_Load, Page_DataBind, Page_PreRender, and Page_Unload. These methods are treated as handlers for the corresponding events exposed by the Page class. The HTTP run time will automatically bind these methods to page events, saving developers from having to write the necessary glue code. For example, the method named Page_Load is wired to the page's Load event as if the following code was written.

this.Load += new EventHandler(this.Page_Load); 

The automatic recognition of special names is a behavior under the control of the AutoEventWireup attribute of the @Page directive. If the attribute is set to false, any applications that wish to handle an event need to connect explicitly to the page event. Pages that don't use automatic event wire-up will get a slight performance boost by not having to do the extra work of matching names and events. You should note that all Microsoft Visual Studio® .NET projects are created with the AutoEventWireup attribute disabled. However, the default setting for the attribute is true, meaning that methods such as Page_Load are recognized and bound to the associated event.

The execution of a page consists of a sequence of stages listed in the following
table and is characterized by application-level events and/or protected, overridable methods.

Table 1. Key events in the life of an ASP.NET page

Stage                        | Page Event                             | Overridable method
Page initialization          | Init                                   | -
View state loading           | -                                      | LoadViewState
Postback data processing     | -                                      | LoadPostData method in any control that implements the IPostBackDataHandler interface
Page loading                 | Load                                   | -
Postback change notification | -                                      | RaisePostDataChangedEvent method in any control that implements the IPostBackDataHandler interface
Postback event handling      | Any postback event defined by controls | RaisePostBackEvent method in any control that implements the IPostBackEventHandler interface
Page pre-rendering           | PreRender                              | -
View state saving            | -                                      | SaveViewState
Page rendering               | -                                      | Render
Page unloading               | Unload                                 | -

Some of the stages listed above are not visible at the page level and affect only
authors of server controls and developers who happen to create a class derived from Page. Init, Load, PreRender,
Unload, plus all postback events defined by embedded controls are the only signals of life that a page sends to the
external world.

Stages of Execution

The first stage in the page lifecycle is the initialization. This stage is characterized by the Init event, which fires to the application after the page's control tree has been successfully created. In other words, when the
Init event arrives, all the controls statically declared in the .aspx source file have been instantiated and hold their
default values. Controls can hook up the Init event to initialize any settings that will be needed during the lifetime of the incoming Web request. For example, at this time controls can load external template files or set up
the handler for the events. You should notice that no view state information is available for use yet.

Immediately after initialization, the page framework loads the view state for the page. The view state is a collection of name/value pairs, where controls and the page itself store any information that must be persistent across Web requests. The
view state represents the call context of the page. Typically, it contains the state of the controls the last time the page was processed on the server. The view state is empty the first time the page is requested in the session. By
default, the view state is stored in a hidden field silently added to the page. The name of this field is __VIEWSTATE. By overriding the LoadViewState method—a protected overridable method on the Control class—component developers can control how the view state is restored and how its contents are mapped to the internal state.

Methods like LoadPageStateFromPersistenceMedium and its counterpart SavePageStateToPersistenceMedium can be used to load and save the view state to an alternative storage
medium—for example, Session, databases, or a server-side file. Unlike LoadViewState, the aforementioned methods are available only in classes derived from Page.
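
For instance, here is a minimal sketch (the class and key names are made up) of a page that keeps its view state in Session instead of the __VIEWSTATE hidden field:

using System.Web.UI;

public class SessionViewStatePage : Page
{
    private const string ViewStateKey = "__PAGEVIEWSTATE";

    // Called by the framework instead of reading the __VIEWSTATE field.
    protected override object LoadPageStateFromPersistenceMedium()
    {
        return Session[ViewStateKey];
    }

    // Called by the framework instead of writing the __VIEWSTATE field.
    protected override void SavePageStateToPersistenceMedium(object viewState)
    {
        Session[ViewStateKey] = viewState;
    }
}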

Once the view state has been restored, the controls in the page tree are in the same state they were the last time the page was rendered to the browser. The next step consists of updating their state to incorporate client-side changes. The postback data-processing stage gives controls a chance to update their state so that it accurately reflects the state of the corresponding HTML element on the client. For example, a server TextBox control has its HTML counterpart in an <input type=text> element. In the postback data stage, the TextBox control will retrieve the
current value of the <input> tag and use it to refresh its internal state. Each control is responsible for extracting values from posted data and updating some of its properties. The TextBox control will update its Text property, whereas the CheckBox control will refresh its Checked property. The match between a server control and an HTML element is based on the ID of both.

At the end of the postback data processing stage, all controls in the page reflect the previous state updated with changes entered on the client. At this point, the Load event is fired to the page.

There might be controls in the page that need to accomplish certain tasks if a sensitive property is modified across two different requests. For example, if the text of a TextBox control is modified on the client, the control fires the TextChanged event. Each control can decide to fire an appropriate event if one or more of its properties are modified with the values coming from the client. Controls for which these changes are critical implement the IPostBackDataHandler interface. In its LoadPostData method, a control verifies whether any critical change has occurred since the last request; for each control that reports a change, the page calls the interface's RaisePostDataChangedEvent method immediately after the Load event, and the control fires its own change event from there.
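
As a sketch of how a control participates in these two stages, here is a stripped-down, hypothetical text control (error handling and richer rendering omitted):

using System;
using System.Collections.Specialized;
using System.Web.UI;

public class SimpleTextBox : Control, IPostBackDataHandler
{
    public string Text
    {
        get { object o = ViewState["Text"]; return (o == null) ? "" : (string)o; }
        set { ViewState["Text"] = value; }
    }

    public event EventHandler TextChanged;

    // Postback data processing stage: refresh Text from the posted value
    // and report whether it changed.
    public bool LoadPostData(string postDataKey, NameValueCollection postCollection)
    {
        string posted = postCollection[postDataKey];
        if (posted != Text)
        {
            Text = posted;
            return true;   // tells the page to call RaisePostDataChangedEvent later
        }
        return false;
    }

    // Postback change notification stage: fire our own change event.
    public void RaisePostDataChangedEvent()
    {
        if (TextChanged != null)
            TextChanged(this, EventArgs.Empty);
    }

    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("<input type=\"text\" name=\"" + UniqueID + "\" value=\"" + Text + "\" />");
    }
}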



The key event in the lifecycle of a page is when it is called to execute the server-side code associated with an event triggered on the client. When the user clicks a button, the page posts back. The collection of posted values contains the ID of the button that started the whole operation. If the control is known to implement the IPostBackEventHandler interface (buttons and link buttons do), the page framework calls the RaisePostBackEvent method. What this method does depends on the type of the control. With regard to buttons and link buttons, the method looks up the Click event handler and runs the associated delegate.
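
The server-side half of that mechanism looks roughly like this sketch (a hypothetical control, omitting its rendering):

using System;
using System.Web.UI;

public class SimpleButton : Control, IPostBackEventHandler
{
    public event EventHandler Click;

    // Called by the page framework when the posted data identifies
    // this control as the originator of the postback.
    public void RaisePostBackEvent(string eventArgument)
    {
        if (Click != null)
            Click(this, EventArgs.Empty);
    }
}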

After handling the postback event, the page prepares for rendering. This stage is signaled by the PreRender
event. This is a good time for controls to perform any last minute update operations that need to take place immediately before the view state is saved and the output rendered. The next stage is view state saving, in which all controls and the page itself are invited to flush the contents of their own ViewState collection. The resultant view state is then serialized, hashed, Base64 encoded, and associated with the
__VIEWSTATE hidden field.

The rendering mechanism of individual controls can be altered by overriding the Render method. The method takes an HTML writer object and uses it to accumulate all HTML text to be generated for the control. The default implementation of the Render method for the Page class consists of a recursive call to all
constituent controls. For each control the page calls the Render method and caches the HTML output.
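
For an individual control, overriding Render might look like this sketch (a hypothetical control that just wraps its children in an extra element):

using System.Web.UI;

public class FramedPanel : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("<div class=\"frame\">");
        base.Render(writer);   // the base implementation renders the child controls
        writer.Write("</div>");
    }
}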

The final sign of life of a page is the Unload event that arrives just before the page object is dismissed. In this event you should release any critical resource you might have (for example, files, graphical objects,
database connections).

Finally, after this event the browser receives the HTTP response packet and displays the page.

To conclude, The ASP.NET page object model is particularly innovative because of the eventing mechanism. A Web page is composed of controls that both produce a rich HTML-based user interface and interact with the user through events. Setting up an eventing model in the context of Web applications is challenging. It's amazing to see that client-side generated events are resolved with server-side code, and the output of this is visible as the same
HTML page, only properly modified.

To make sense of this model it is important to understand the various stages in the
page lifecycle and how the page object is instantiated and used by the HTTP run
time.

Saturday, December 16, 2006

Managed, Unmanaged, Native: What Kind of Code Is This?


Many of my colleagues and I were confused by the terms managed code, unmanaged code, native code, etc., so I decided to get those concepts straight...

With the release of Visual Studio .NET 2003 (formerly known as Everett) on April 24th, many developers are now willing to consider using the new technology known as managed code. But especially for C++ developers, it can be a bit confusing.


What Is Managed Code?


Managed Code is what Visual Basic .NET and C# compilers create. It compiles to Intermediate Language (IL), not to machine code that could run directly on your computer. The IL is kept in a file called an assembly, along with metadata that describes the classes, methods, and attributes (such as security requirements) of the code you've created. This assembly is the one-stop-shopping unit of deployment in the .NET world. You copy it to another server to deploy the assembly there—and often that copying is the only step required in the deployment. Managed code runs in the Common Language Runtime. The runtime offers a wide variety of services to your running code. In the usual course of events, it first loads and verifies the assembly to make sure the IL is okay. Then, just in time, as methods are called, the runtime arranges for them to be compiled to machine code suitable for the machine the assembly is running on, and caches this machine code to be used the next time the method is called. (This is called Just In Time, or JIT compiling, or often just Jitting.)


As the assembly runs, the runtime continues to provide services such as security, memory management, threading, and the like. The application is managed by the runtime.


Visual Basic .NET and C# can produce only managed code. If you're working with those languages, you are making managed code. Visual C++ .NET can produce managed code if you like: When you create a project, select one of the application types whose name starts with "Managed", such as "Managed C++ application".


What Is Unmanaged Code?


Unmanaged code is what you used to make before Visual Studio .NET 2002 was released. Visual Basic 6, Visual C++ 6, heck, even that 15-year-old C compiler you may still have kicking around on your hard drive all produced unmanaged code. It compiled directly to machine code that ran on the machine where you compiled it—and on other machines as long as they had the same chip, or nearly the same. It didn't get services such as security or memory management from an invisible runtime; it got them from the operating system. And importantly, it got them from the operating system explicitly, by asking for them, usually by calling an API provided in the Windows SDK. More recent unmanaged applications got operating system services through COM calls.


Unlike the other Microsoft languages in Visual Studio, Visual C++ can create unmanaged applications. When you create a project and select an application type whose name starts with MFC, ATL, or Win32, you're creating an unmanaged application.


This can lead to some confusion: When you create a "Managed C++ application", the build product is an assembly of IL with an .exe extension. When you create an MFC application, the build product is a Windows executable file of native code, also with an .exe extension. The internal layout of the two files is utterly different. You can use the Intermediate Language Disassembler, ildasm, to look inside an assembly and see the metadata and IL. Try pointing ildasm at an unmanaged exe and you'll be told it has no valid CLR (Common Language Runtime) header and can't be disassembled. Same extension, completely different files.
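
For example, with the SDK tools on the PATH, you can try this on any of your build products (the file name here is just a placeholder):

ildasm MyApp.exe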


What about Native Code?


The phrase native code is used in two contexts. Many people use it as a synonym for unmanaged code: code built with an older tool, or deliberately chosen in Visual C++, that does not run in the runtime, but instead runs natively on the machine. This might be a complete application, or it might be a COM component or DLL that is being called from managed code using COM Interop or PInvoke, two powerful tools that make sure you can use your old code when you move to the new world. I prefer to say "unmanaged code" for this meaning, because it emphasizes that the code does not get the services of the runtime. For example, Code Access Security in managed code prevents code loaded from another server from performing certain destructive actions. If your application calls out to unmanaged code loaded from another server, you won't get that protection.


The other use of the phrase native code is to describe the output of the JIT compiler, the machine code that actually runs in the runtime. It's managed, but it's not IL, it's machine code. As a result, don't just assume that native = unmanaged.


Does Managed Code Mean Managed Data?


Again with Visual Basic and C#, life is simple because you get no choice. When you declare a class in those languages, instances of it are created on the managed heap, and the garbage collector takes care of lifetime issues. But in Visual C++, you get a choice. Even when you're creating a managed application, you decide class by class whether it's a managed type or an unmanaged type.

This is an unmanaged type:

class Foo
{
private:
    int x;
public:
    Foo() : x(0) {}
    Foo(int xx) : x(xx) {}
};

This is a managed type:

__gc class Bar
{
private:
    int x;
public:
    Bar() : x(0) {}
    Bar(int xx) : x(xx) {}
};

The only difference is the __gc keyword on the definition of Bar. But it makes a huge difference. Managed types are garbage collected. They must be created with new, never on the stack. So this line is fine:

Foo f;

But this line is not allowed:

Bar b;

If I do create an instance of Foo on the heap, I must remember to clean it up:

Foo* pf = new Foo(2);
// . . .
delete pf;

The C++ compiler actually uses two heaps, a managed and an unmanaged one, and uses operator overloading on new to decide where to allocate memory when you create an instance with new. If I create an instance of Bar on the heap, I can ignore it. The garbage collector will clean it up some time after it becomes clear that no one is using it (no more pointers to it are in scope).

There are restrictions on managed types: they can't use multiple inheritance or inherit from unmanaged types, they can't allow private access with the friend keyword, and they can't implement a copy constructor, to name a few. So, you might not want your classes to be managed classes. But that doesn't mean you don't want your code to be managed code.

Fundamentals of Windows Services


Windows services enable you to perform tasks that execute as different background processes. You can use Windows services to perform tasks, such as monitoring the usage of a database. A Windows service executes in its own process space until a user stops it, or the computer is shut down.
Windows services run as background processes. These applications do not have a user interface, which makes them ideal for tasks that do not require any user interaction. You can install a Windows service on any server or computer that is running Windows 2000, Windows XP, or Windows NT. You can also specify a Windows service to run in the security context of a specific user account that is different from the logged on user account or the default computer account. For example, you can create a Windows service to monitor performance counter data and react to threshold values in a database.


You create Windows services to perform tasks, such as managing network connections and monitoring resource access and utilization. You can also use Windows services to collect and analyze system usage data and log events in the system or custom event log. You can view the list of services running on a computer at any time by opening Administrative Tools from Control Panel and then opening Services.
A Windows service is installed in the registry as an executable object. The Service Control Manager (SCM) manages all the Windows services. The Service Control Manager, which is a remote procedure call (RPC) server, supports the local or remote management of a service. You can create applications that control Windows services through the SCM by using Visual Studio .NET. The .NET Framework provides classes that enable you to create, install, and control Windows services easily.

The Windows service architecture consists of three components:

(a) Service application.
An application that consists of one or more services that provide the desired functionality

(b) Service controller application.
An application that enables you to control the behavior of a service

(c) Service Control Manager.
A utility that enables you to control the services that are installed on a computer

You can create Windows services by using any .NET language, such as Visual C# or Visual Basic .NET. The System.ServiceProcess namespace of the .NET Framework contains classes that enable you to create, install, implement, and control Windows services. You use the methods of the ServiceBase class to create a Windows service. After you create a Windows service application, you install it by registering the application in the registry of a computer. You use the ServiceInstaller and ServiceProcessInstaller classes to install Windows services. You can view all the registered service applications in the Windows registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services. After you install a service, you need to start it. You use the ServiceController class and the SCM to start, stop, pause, and continue a Windows service. The ServiceController class also allows you to execute custom commands on a service.
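
As a minimal sketch (names are hypothetical, .NET 1.x style), a service class and its entry point look roughly like this:

using System.ServiceProcess;

public class MonitorService : ServiceBase
{
    public MonitorService()
    {
        this.ServiceName = "MonitorService";
    }

    protected override void OnStart(string[] args)
    {
        // Start timers, open connections, or spin up worker threads here.
    }

    protected override void OnStop()
    {
        // Stop background work and release resources here.
    }

    public static void Main()
    {
        // Run loads the service into the Service Control Manager.
        ServiceBase.Run(new MonitorService());
    }
}
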
After you start a service application on a computer, it can exist in a running, paused, or stopped state. In addition, a service can be in a pending state. A pending state indicates that a command, such as the command to pause a service, was issued but not yet completed.
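
Controlling such a service from another application takes only a few ServiceController calls, for example (assuming the hypothetical service above is installed):

using System.ServiceProcess;

class ControlDemo
{
    static void Main()
    {
        ServiceController sc = new ServiceController("MonitorService");
        if (sc.Status == ServiceControllerStatus.Stopped)
        {
            sc.Start();
            // Block until the pending start completes.
            sc.WaitForStatus(ServiceControllerStatus.Running);
        }
    }
}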

Windows services are categorized based on the number of services that run in a process space. You can have one service running in one process space or multiple services sharing a process space. The services that use a single process space are called Win32OwnProcess services, whereas the services that share a process with other services are called Win32ShareProcess services.

Differences Between Service Applications and Other Visual Studio .NET Applications

Windows services are different from other Visual Studio .NET applications. The following sections discuss these differences.

Installing Windows Services

Unlike other Visual Studio projects, you need to install a service application before the service can run. To register and install the service, you first need to add installation components to a service application.

Debugging Windows Services

You cannot debug a service application by pressing the F5 or F11 key like you can with other types of Visual Studio .NET applications. To debug a service application, you need to install and start the service, and then attach a debugger to the process of the service.

Executing Windows Services

In a Windows service, the Run method loads a service into the SCM. You call the Run method from the Main method of a service application. Another difference between executing a Windows service and other Visual Studio .NET applications is that the dialog boxes raised from a Windows service are not visible to the user. In addition, the error messages generated from a Windows service are logged in the event logs. You can specify that Windows services run in their own security context. Therefore, a Windows service can start before a user logs on and continue to run even after the user logs off.

Programming Model of Windows Service Applications

You create Windows service applications by using the classes of the System.ServiceProcess namespace. You use the methods of the classes in this namespace to provide functionality to your service application.


Application Performance

As server-side .NET development becomes more prevalent, Application Performance Management tools are becoming available to fine-tune .NET applications. And, as in the past, it seems that the ability to build new and more sophisticated applications always stays ahead of the ability to manage them.

So said Bernd Harzog, CEO and founder of APMExperts. That, he points out, is exactly the situation organizations are now facing as they try to manage their .NET applications.

With more organizations looking to .NET as a platform for enterprise applications, Harzog said this is the next big area for the Application Performance Management. He and the various vendors targeting this space see .NET Application Performance Management following a similar trajectory as the market - and the management challenges - for J2EE performance tools.

Application Performance Management tools provide the ability to monitor and manage the performance of the application from the user's perspective, and track the source of problems. "The true value doesn't come in understanding .NET, but in the end-to-end picture of your application," said Rob Greer, director of product marketing-Application Performance Management, at Symantec Corp., in Mountain View, Calif.

"The [.NET] challenges are similar to J2EE," said Steve Tack, director of product management at Compuware. "These applications are relatively new technology. The same things that make them powerful and sophisticated also yield different challenges. There are many moving parts and components.

While there was an earlier adoption rate for applications in the Java space, "we're definitely seeing an uptake in .NET," Tack said.

Steve Stover, product manager for .NET at Quest Software Inc., in Aliso Viejo, Calif., said organizations learned some lessons about J2EE Application Performance Management that they are leveraging for their .NET applications.

"We see more customers beginning the deployment of .NET applications being more proactive about putting management in place as soon as they put applications in production," Stover said. "There were some hard lessons learned from the Java side."

Application Performance Management on the Java side is now what Harzog dubs an "elephant war," whereas the .NET Application Performance Management market is embryonic, he said. "In the last four years you had a bunch of startups that built pretty good [J2EE Application Performance Management] products that got acquired by big companies like IBM, Compuware, BMC, CA," he said. "Now managing Web-based J2EE applications in production is a solved problem, and you can buy products from mainstream systems management vendors."

Harzog has a caveat, though: "From a customer perspective, is managing J2EE applications in production a brainless thing to do? No. It's still difficult, even with the best of tools. That said, applications built to J2EE are probably the most manageable applications because of all these tools developed to instrument the stack. It's not perfect, but it's as good as it gets."

Hon Wong, CEO of startup Symphoniq Corp., and cofounder of NetIQ, said Application Performance Management is more mature on the J2EE side "from a hype point of view," but said the tools aren't quite there yet for the ability to provide an end-to-end approach for managing day-to-day application performance. However, Wong agrees that on the .NET side, "it's an open market."

"The market is 95% unpenetrated," Harzog said. "Vendors are just beginning to get traction. And the awareness that the need to manage these applications is different than before is slowly dawning on people. The tools are maturing, but there are very few things to choose from."

Harzog said vendors with .NET Application Performance Management offerings include Symphoniq, Wily/Computer Associates, Symantec, Quest, Compuware, and AVIcode.

Diverse app pools

It's a complicated problem though, Harzog said, particularly on the .NET side, where the applications are typically more diverse than on the J2EE side. "The overwhelming majority of J2EE applications used in big-time situations are Web based, and the overwhelming architecture is a Web server, a J2EE application server, and a database server on the back end. If you've built J2EE applications to that set of components you've got a lot of choices with products that can measure transaction timing," he said.

In contrast, .NET applications tend to be more diverse, he said, as the .NET framework allows developers to include older applications rather than rewriting from scratch. "If I listed all the different ways in which Windows applications have been built since Windows existed, I would run out of time.

.NET allows you to include and reuse what you built in the past without rewriting it, but if you decide to go to .NET and include pieces of old applications, you've created a situation where [many] different things need to be managed."
The problem, Harzog said, is "a .NET-specific management tool can't penetrate the legacy code. As a developer you made a tradeoff without knowing it--you can get the application into production more quickly if you reuse legacy pieces, but you made it more difficult to manage."

Making the problem "exponentially more complex" is the growing trend toward Web services and service-oriented architecture (SOA), Harzog said. Now organizations can use someone else's functionality wrapped in a standard interface. "You can build an application that has all kinds of external dependencies; it's dependent on a bunch of stuff behind the API that you have no visibility into. SOA and Web services create diversity in terms of the external things you have to talk to and deal with in the code, on top of a diverse internal environment."

A lot of customers and vendors are now struggling to solve these issues, Harzog said. The promise of .NET Application Performance Management tools is that they will allow IT operations staff, who aren't necessarily skilled in .NET, to identify and solve some issues on their own without involving the developers. Or for issues that require the developer's expertise, the tools aim to provide enough information so the time it takes to address the performance problem is reduced.

Developer role
While developers do have a responsibility for creating applications that perform well and meet requirements, "performance and the management of applications are often secondary to delivering functional capability," said Quest's Stover. As a result, once an application gets into production, the developers often fall into an application support role because they have the knowledge of the application and how it runs, he said.

The pressure to produce new applications is in large part why the Application Response Measurement (ARM) standard has not gained traction despite being around for more than a half-dozen years, Harzog said. "ARM is a set of APIs and instrumentation that all developers are supposed to put in all applications. All the systems management vendors said it was the holy grail. What happened? No one did it. People who build custom applications for a living are under unrelenting pressure to put more features into applications in less time."

Symphoniq's Wong agrees. "Certainly if you have the time and budget to 'ARM' it, it's a great way of doing it, but who has the time and budget? People are under time constraints; they usually don't spec in monitoring capability." And when a developer tries to put instrumentation into an application, it's just guesswork, he said. "You're just guessing that this part may be more susceptible than others, but you usually guess wrong."

Wong and his competitors in the .NET performance management tools space are trying to take the guesswork out of the developer's job. The second part of this story will examine what Symphoniq and others have to offer, and what organizations should look for in a solution.