Tuesday, August 28, 2012
Instance Deactivation - an Optimization technique in WCF
Figure 10 Contexts and Instances
Sessions actually correlate the client messages not to the instance, but to the context that hosts it. When the session starts, the host creates a new context; when the session ends, the context is terminated. By default, the lifeline of the context is the same as that of the instance it hosts. However, for optimization purposes, Windows Communication Foundation gives the service designer the option of separating the two lifelines and deactivating the instance separately from its context. In fact, Windows Communication Foundation even allows a context that has no instance at all. I call this instance management technique instance deactivation. The usual way of controlling instance deactivation is through the ReleaseInstanceMode property of the OperationBehavior attribute:
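A minimal sketch (the contract and service names are illustrative, not from the original post):

```csharp
using System.ServiceModel;

[ServiceContract]
interface IMyContract
{
    [OperationContract]
    void MyMethod();
}

class MyService : IMyContract
{
    // Deactivate the instance (but not its hosting context) once this
    // operation returns. Other values: None (the default), BeforeCall,
    // and BeforeAndAfterCall.
    [OperationBehavior(ReleaseInstanceMode = ReleaseInstanceMode.AfterCall)]
    public void MyMethod()
    {
    }
}
```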
Instead of making a design-time decision on which methods to use to deactivate the instance, you can make a run-time decision to deactivate the instance after the method returns. You do that by calling the ReleaseServiceInstance method on the instance context.
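Continuing the same illustrative service, the run-time version of that decision looks like this:

```csharp
using System.ServiceModel;

class MyService : IMyContract
{
    public void MyMethod()
    {
        // Run-time decision: deactivate this instance after the call returns,
        // regardless of any ReleaseInstanceMode set at design time.
        OperationContext.Current.InstanceContext.ReleaseServiceInstance();
    }
}
```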
Instance deactivation is an optimization technique, and like all such techniques it should be avoided in the general case. Consider using instance deactivation only after failing to meet both your performance and scalability goals, and only when careful examination and profiling have proven beyond a doubt that using it will improve the situation. If scalability and throughput are your concern, choose the simplicity of the per-call instancing mode, and avoid instance deactivation.
http://msdn.microsoft.com/en-us/magazine/cc163590.aspx#S8
Demarcating Operations in WCF
Sometimes when dealing with session contracts there is an implied order to operation invocations. Some operations cannot be called first while other operations must be called last.
For example, consider a contract used to manage customer orders. The contract has the following constraints: the client must first provide the customer ID against which items are added; then the total is calculated; and when order processing is complete, the session is terminated.
Windows Communication Foundation allows contract designers to designate contract operations as operations that can or cannot start or terminate the session using the IsInitiating and IsTerminating properties of the OperationContract attribute:
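The original code listing did not survive the post; the following is a reconstruction sketch consistent with the constraints described above (names are illustrative):

```csharp
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
interface IOrderManager
{
    // May start the session (IsInitiating defaults to true).
    [OperationContract]
    void SetCustomerId(int customerId);

    // Cannot be the first call in a session.
    [OperationContract(IsInitiating = false)]
    void AddItem(int itemId);

    [OperationContract(IsInitiating = false)]
    decimal GetTotal();

    // Cannot be called first, and terminates the session when it returns.
    [OperationContract(IsInitiating = false, IsTerminating = true)]
    bool ProcessOrders();
}
```

Note that setting IsInitiating to false requires a session-aware contract, hence SessionMode.Required.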
By default, operations do not demarcate the session boundary: they can be called first, last, or in between any other operations in the session. Using non-default values lets you dictate that a method cannot be called first, or must be called last, or both, in order to enforce the interaction constraints.
When IsInitiating is set to true (the default), it means the operation will start a new session if it is the first method called by the client, but that it will be part of the ongoing session if another operation is called first. When IsInitiating is set to false, it means the operation can never be called as the first operation by the client in a new session, and the method can only be part of an ongoing session.
When IsTerminating is set to false (the default), the session continues after the operation returns. When IsTerminating is set to true, the session terminates once the method returns, and the client will not be able to issue additional calls on the proxy. Note that the client must still close the proxy because the operation does not dispose of the service instance—it simply rejects subsequent calls.
System.ServiceModel is the assembly that contains the core functionality of WCF, which explains why the WCF platform is often called the service model. Any project that exposes or consumes WCF services must reference the System.ServiceModel assembly, and possibly other supporting assemblies.
WCF- Instance and Concurrency Management
When we design an enterprise application, we need to provide great scalability, performance, throughput, transactions, reliability, and so on. Honestly, there is no one-size-fits-all solution for all of these needs, but WCF can help us meet these architectural requirements using different techniques. One of those techniques is instance and concurrency management, available through WCF service behaviors and discussed in this article.
Behaviors are classes that configure and extend the WCF runtime, and there are basically three types of behaviors:
- Service behaviors - control items such as instancing and transactions
- Endpoint behaviors - used for inspecting incoming or outgoing messages
- Operation behaviors - well suited for manipulating serialization, transaction flow, and parameter handling for a service operation
Concurrency and Instancing
One of the great things about WCF is the opportunity to increase a service's throughput by increasing concurrency, meaning executing different tasks in parallel. WCF controls instancing and concurrency through two behavior properties: InstanceContextMode and ConcurrencyMode.
Instance management is a set of techniques for binding client requests to service instances, governing which instance handles which request. To get familiar with the instance management modes, we should take a brief look at each of them. Basically, there are three instance modes in WCF:
- Per-session instance mode
- Per-call instance mode
- Singleton instance mode
ConcurrencyMode, in turn, accepts one of three values:
- Single - This is the default setting and instructs the runtime to allow access on only one thread per instance of the service class. This setting is the safest one because service operations do not need to worry about thread safety.
- Reentrant - Only one thread at a time can access the service class, but the thread can leave the class and come back later to continue.
- Multiple - Multiple threads may access the service class simultaneously. This setting requires the class to be written in a thread-safe manner.
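As a small sketch (the service and contract names are hypothetical), the two properties are combined on the service class like this:

```csharp
using System.ServiceModel;

[ServiceContract]
interface IOrderService
{
    [OperationContract]
    void PlaceOrder(int itemId);
}

// A fresh instance serves each call, and multiple threads may enter the
// class simultaneously, so any shared (static) state must be thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
class OrderService : IOrderService
{
    public void PlaceOrder(int itemId)
    {
        // Process the order...
    }
}
```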
Sunday, August 26, 2012
Data Contracts and serialization
WCF has its own formatter, the DataContractSerializer. It captures only the state of the object, according to the serialization or data contract schema.
.NET offers two formatters for serializing and deserializing types, both of which support the IFormatter interface:
The BinaryFormatter serializes into a compact binary format, enabling fast serialization and deserialization.
The SoapFormatter uses a .NET-specific SOAP XML format.
The DataContractSerializer does not support IFormatter. The NetDataContractSerializer, by contrast, is similar to the .NET formatters: it captures the type information as well as the state of the object. It is a complement to the DataContractSerializer.
The ability to serialize with the NetDataContractSerializer and deserialize with the DataContractSerializer opens the way for versioning tolerance and for migrating legacy code that shares type information toward a more service-oriented approach, where only the data schema is maintained.
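A minimal sketch of the round trip the passage describes, assuming a simple data contract:

```csharp
using System.IO;
using System.Runtime.Serialization;

[DataContract]
class Contact
{
    [DataMember] public string FirstName;
    [DataMember] public string LastName;
}

class SerializerDemo
{
    static void Main()
    {
        var contact = new Contact { FirstName = "Jane", LastName = "Doe" };
        using (var stream = new MemoryStream())
        {
            // NetDataContractSerializer embeds CLR type information
            // alongside the data contract state.
            new NetDataContractSerializer().Serialize(stream, contact);

            stream.Position = 0;

            // DataContractSerializer reads back only the data contract
            // schema, ignoring the embedded type information.
            var dcs = new DataContractSerializer(typeof(Contact));
            var copy = (Contact)dcs.ReadObject(stream);
        }
    }
}
```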
Data Contracts
The DataContract attribute is applied at the class level; to serialize the class's members, we apply the DataMember attribute to them.
When a data contract is used in the contract operation, it is published in the service metadata.
Saturday, August 25, 2012
Transport Level Sessions
In WCF, the client sends messages to the service and never invokes an instance directly, so a direct association of an object with the client is not possible. Instead, WCF uses a transport session, which ensures that all messages coming from a particular client are sent to the same transport channel on the host.
It is as if the client and the channel maintain a logical session at the transport level.
The transport session is optional, is unrelated to any application-level session, and is an aspect of the binding configuration.
The transport session is one of the key fundamental concepts of WCF, affecting reliability, instance management, error management, synchronization, transactions, and security.
It relies on WCF's ability to identify the client and correlate all its messages to a particular channel.
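As a sketch of the binding aspect (the timeout value is illustrative): TCP maintains a transport session natively, while the connectionless HTTP bindings can emulate one with reliable messaging.

```csharp
using System;
using System.ServiceModel;

class BindingDemo
{
    static void Main()
    {
        // netTcpBinding is connection-oriented, so a transport session
        // is maintained for the lifetime of the proxy's channel.
        var tcpBinding = new NetTcpBinding();

        // wsHttpBinding is connectionless; enabling the reliable session
        // lets WCF correlate all of a client's messages to one channel.
        var wsBinding = new WSHttpBinding();
        wsBinding.ReliableSession.Enabled = true;
        wsBinding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);
    }
}
```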
Friday, August 24, 2012
Host Architecture
It is important to explore how the transition is made from a technology-neutral, service-oriented interaction to CLR interfaces and classes.
The host performs that bridging.
Each .NET host process can have many app domains; each app domain can have multiple service host instances; and each service host instance is dedicated to a particular service type. Thus, when you create a host instance, you are in effect registering that service host instance with all the endpoints for that type on the host machine that correspond to its base addresses.
Each service host instance has one or more contexts; the context is the innermost execution scope of the service instance. It is the combined effect of the service host and the context that exposes a native CLR type as a service. After a message is passed through the channels, the host maps it to a new or existing context (and the object instance inside) and lets it process the call.
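A minimal hosting sketch of that registration (the type names, base address, and endpoint address are placeholders):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
interface IMyContract
{
    [OperationContract]
    void MyMethod();
}

class MyService : IMyContract
{
    public void MyMethod() { }
}

class Program
{
    static void Main()
    {
        // Creating the host registers MyService with the endpoints
        // defined for that type under the base address.
        var host = new ServiceHost(typeof(MyService),
                                   new Uri("net.tcp://localhost:8000"));
        host.AddServiceEndpoint(typeof(IMyContract),
                                new NetTcpBinding(),
                                "MyService");
        host.Open();   // messages are now mapped to contexts and instances

        Console.WriteLine("Host is running. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}
```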
WCF Architecture
WCF offers support for reliability, transactions, concurrency management, security, and instance activation, all of which rely on WCF's interception-based architecture. Having the client interact with the service through a proxy means that WCF is always present between the client and the service, intercepting the call and performing pre-call and post-call processing. The interception starts when the proxy serializes the call stack frame into a message and sends the message down a chain of channels. Each client-side channel does pre-call processing of the message. The last channel is the transport channel, which sends the message over the configured transport to the host.
On the host side, the message goes through another chain of channels that perform host-side pre-call processing of the message. The first channel is the transport channel, which receives the message from the client's transport. Subsequent channels perform various tasks, such as decrypting the message body, decoding the message, joining the propagated transaction, setting the security principal, managing the session, and activating the service instance.
The last channel passes the message to the dispatcher. The dispatcher converts the message to a stack frame and calls the service instance.
The interception on both the client and service sides ensures that the client and the service get the runtime environment they require to operate properly.
The service instance executes the call and returns control to the dispatcher, which converts the returned values and error information into a return message. The dispatcher then passes that message to the host-side channels for post-call processing, such as managing the transaction, deactivating the instance, encoding the reply, and encrypting it. The returned message eventually reaches the transport channel on the client side for post-call processing, which consists of decrypting, decoding, committing or aborting the transaction, and so on. The last channel passes the message to the proxy, which converts the returned message to a stack frame and returns control to the client.
WCF Course Content
- Windows Communication Foundation (WCF)
- WCF Architecture
- Channels, Bindings
- Messages
- Serialization
- Contracts
- Faults
- Callbacks
- Behaviors
- Hosting
- Diagnostics
- Secure Communication
- Authorization
- Reliable Messaging
- Queues
- Transaction
http://www.chinnasoft.com/Course/wcf.pdf
WCF Essentials
WCF Overview
SOA Overview
WCF architecture
Essential WCF concepts:
- Addresses
- Contracts
- Bindings
- Endpoints
- Hosting
- Clients
Contracts
Designing and working with service contracts
Contract overloading and inheritance
Data Contracts
Serialization
Attributes
Versioning
Collections & Generics
Instance Management & Operation
Behaviours
Per-Call Services
Per-Session Services
Singleton Service
Demarcating Operations
Instance Deactivation
Throttling
Operations:
- Request-Reply
- One-Way
- Callback
- Events
- Streaming
Faults
Errors and exceptions
Fault Contracts
Error handling Extensions
Transactions
Transaction Propagation
Protocols and Managers
The Transaction Class
Declarative Programming
Explicit Transaction Programming
With Instance management
Callbacks
Security
Authentication & Authorization
Transfer Security
Scenario-Driven Approach
Concurrency Management
Service Concurrency Mode
Instance Management and Concurrency
Deadlock Avoidance
Synchronization Context
Callbacks
Queued Services
Disconnected Services and Clients
Queued Vs Connected Calls
REST and POX
Consuming WCF
ADO.NET Data Services
WCF RIA Services
Windows Communication Foundation Training Course Outline
Introduction to WCF:
Overview of SOA; WCF architecture; Services, contracts, and addresses; Hosting; Bindings; Endpoints; Metadata exchange; Configuration; Implementing and consuming a service
Defining Service Contracts:
Mapping operations to methods; Overloading operations; Using inheritance; Best practices; Querying contracts; Message contracts; Implementing catch-all contracts
Defining Data Contracts:
What is a data contract? Serialization issues; Using data contract attributes; Versioning data contracts; Using data sets and tables; Using collections and generics
Defining Endpoints and Behaviors:
Defining multiple endpoints; Adding behaviors to services and endpoints; Calling non-WCF services; Managing service instances: per-call, per-session, and singleton; Throttling calls
Handling Faults:
Overview of service-level faults; Defining fault contracts; Handling exceptions at the client
Discovery:
Overview of WS-Discovery; Simple ad-hoc service discovery; Using scope when discovering endpoints; Service announcements
Routing:
Overview of RoutingService; Hosting the RoutingService; Configuring the RoutingService with message filters; Content-based routing; Protocol bridging; Error handling; Multicast routing
Managing Operations and Concurrency:
Overview of message exchange patterns (MEPs); Defining synchronous request-reply operations; Defining one-way operations; Defining asynchronous call-back operations; Service synchronization; Managing events; Streaming
Managing Transactions:
The role of transactions in SOA; Implementing transactional operations; Transaction management and propagation
Managing Security:
Security concepts; Binding security; Specifying credentials; Obtaining security information; Application scenarios: intranet, Internet, B2B, anonymous clients; Federated security and WIF
Queued Services:
Brief overview of queued services
RESTful Services:
Overview of REST; REST bindings in WCF; Implementing RESTful services; Consuming RESTful services; Caching
Workflow Services:
Role of WF in WCF; Creating and hosting a workflow service; Managing workflow instances remotely; Using workflow activities
Messaging & Routing:
Brief overview of messaging and routing
Course Number: WCF-202
Duration: 3 days
WCF Training Overview
Accelebrate's Windows Communication Foundation (WCF) training class teaches attendees the essential concepts of WCF and how to implement WCF services and clients. The course uses .NET 4.0 and Visual Studio 2010.
Location and Pricing
Most Accelebrate courses are taught on-site at our clients' locations worldwide for groups of 3 or more attendees and are customized to their specific needs. Please visit our client list to see organizations for whom we have recently delivered training. These courses can also be delivered as live, private online classes for groups that are geographically dispersed or wish to save on the instructor's or students' travel expenses. To receive a customized proposal and price quote for private training at your site or online, please contact us. In addition, some courses are available as live, online classes for individuals. To see a schedule of online courses, please visit http://www.accelebrate.com/online_training/?action=category&page=winforms.
WCF Training Prerequisites
Students in this WCF 4.0 with C# training class should have a good working knowledge of building .NET applications with C#. Knowledge of building distributed systems and Web services will also be an advantage.
Hands-on/Lecture Ratio
This WCF training class is 70% hands-on, 30% lecture, with the longest lecture segments lasting 20 minutes.
WCF Training Materials
All WCF 4.0 with C# training students receive more than 300 pages of comprehensive courseware and a related textbook.
Software Needed on Each Student PC
WCF Training Objectives
All attendees will learn how to:
WCF Training Outline
Monday, August 20, 2012
UNC Naming Conventions
File and Directory Names
All file systems follow the same general naming conventions for an individual file: a base file name and an optional extension, separated by a period. However, each file system, such as NTFS, CDFS, exFAT, UDFS, FAT, and FAT32, can have specific and differing rules about the formation of the individual components in the path to a directory or file. Note that a directory is simply a file with a special attribute designating it as a directory, but otherwise must follow all the same naming rules as a regular file. Because the term directory simply refers to a special type of file as far as the file system is concerned, some reference material will use the general term file to encompass both concepts of directories and data files as such. Because of this, unless otherwise specified, any naming or usage rules or examples for a file should also apply to a directory. The term path refers to one or more directories, backslashes, and possibly a volume name. For more information, see the Paths section.
Character count limitations can also be different and can vary depending on the file system and path name prefix format used. This is further complicated by support for backward compatibility mechanisms. For example, the older MS-DOS FAT file system supports a maximum of 8 characters for the base file name and 3 characters for the extension, for a total of 12 characters including the dot separator. This is commonly known as an 8.3 file name. The Windows FAT and NTFS file systems are not limited to 8.3 file names, because they have long file name support, but they still support the 8.3 version of long file names.
Naming Conventions
The following fundamental rules enable applications to create and process valid names for files and directories, regardless of the file system:
- Use a period to separate the base file name from the extension in the name of a directory or file.
- Use a backslash (\) to separate the components of a path. The backslash divides the file name from the path to it, and one directory name from another directory name in a path. You cannot use a backslash in the name for the actual file or directory because it is a reserved character that separates the names into components.
- Use a backslash as required as part of volume names, for example, the "C:\" in "C:\path\file" or the "\\server\share" in "\\server\share\path\file" for Universal Naming Convention (UNC) names. For more information about UNC names, see the Maximum Path Length Limitation section.
- Do not assume case sensitivity. For example, consider the names OSCAR, Oscar, and oscar to be the same, even though some file systems (such as a POSIX-compliant file system) may consider them as different. Note that NTFS supports POSIX semantics for case sensitivity but this is not the default behavior. For more information, see CreateFile.
- Volume designators (drive letters) are similarly case-insensitive. For example, "D:\" and "d:\" refer to the same volume.
- Use a period as a directory component in a path to represent the current directory, for example ".\temp.txt". For more information, see Paths.
- Use two consecutive periods (..) as a directory component in a path to represent the parent of the current directory, for example "..\temp.txt". For more information, see Paths.
- To get the 8.3 form of a long file name, use the GetShortPathName function.
- To get the long file name version of a short name, use the GetLongPathName function.
- To get the full path to a file, use the GetFullPathName function.
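In managed code, some of these helpers have direct counterparts; here is a small C# sketch (Path.GetFullPath ultimately relies on GetFullPathName; the 8.3 conversions require P/Invoke and are not shown):

```csharp
using System;
using System.IO;

class PathDemo
{
    static void Main()
    {
        // "." and ".." are resolved against the current directory.
        Console.WriteLine(Path.GetFullPath(@".\temp.txt"));
        Console.WriteLine(Path.GetFullPath(@"..\temp.txt"));

        // Backslashes separate path components; Path.Combine inserts them.
        Console.WriteLine(Path.Combine(@"\\server\share", @"path\file"));
    }
}
```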
Fully Qualified vs. Relative Paths
For Windows API functions that manipulate files, file names can often be relative to the current directory, while some APIs require a fully qualified path. A file name is relative to the current directory if it does not begin with one of the following:
- A UNC name of any format, which always starts with two backslash characters ("\\"). For more information, see the next section.
- A disk designator with a backslash, for example "C:\" or "d:\".
- A single backslash, for example, "\directory" or "\file.txt". This is also referred to as an absolute path.
Maximum Path Length Limitation
In the Windows API (with some exceptions discussed in the following paragraphs), the maximum length for a path is MAX_PATH, which is defined as 260 characters.
Optimize IIS Performance (IIS 7)
IIS 7 provides a powerful, unified facility for output caching by integrating the dynamic output-caching capabilities of ASP.NET with the static output-caching capabilities that were present in IIS 6.0. IIS also lets you use bandwidth more effectively and efficiently by using common compression mechanisms such as Gzip and Deflate. Performance includes the following features:
- Compression
- Output Caching
HTTP compression lets you make more efficient use of bandwidth and enhances the performance of sites and applications. You can configure HTTP compression for both static and dynamic sites.
Enable Output Caching (IIS 7)
The procedures for configuring output caching can be performed at the following levels in IIS:
- Web Server
- Site
- Application
- Physical and virtual directories
- File (URL)
The necessary modules and handlers must be installed on the Web server and enabled at the level at which you perform this procedure. Note that modules can be enabled at only the Web server, site, and application level, but handlers can be enabled at all levels.
The following modules are required (no special handlers are required):
- FileCacheModule
- HTTPCacheModule
- SiteCacheModule
- TokenCacheModule
- UriCacheModule
If you perform this procedure by using IIS Manager, you must be a member of one of the following IIS administrative roles:
- Web Server Administrator
- Site Administrator
- Application Administrator
Note that if you are an IIS Manager user, you might not be able to perform this procedure if the related configuration elements are locked.
If you perform this procedure by using the Appcmd.exe, running WMI scripts, or editing configuration files, you must have write access to the target configuration file or files.
You can improve performance on your site or application by enabling output caching. Caching decreases the amount of processing time for requests made to your site or application by returning a processed copy of a Web page from the cache.
You should enable output caching if your site or application content requires complex or lengthy processing. For example, you might want to enable caching if your application retrieves information from a database. This will let you avoid making a call to the database every time that a particular Web page is requested. In addition to enabling output caching, you must also set up output cache rules to specify how you want content to be cached.
Optimize Output Caching for Dynamic Web Pages (IIS 7)
Internet Information Services (IIS) 7.0 has an output cache feature that caches dynamic content in memory (for example, output from your Microsoft® ASP.NET, classic Active Server Pages (ASP), PHP, or other dynamic pages). This helps to improve performance because the script used to generate dynamic output does not need to run for each request. The cache is able to vary the output that is cached, based on query string values and HTTP headers that are sent from the client to the server. The cache is also integrated with the HTTP.sys kernel mode driver to help improve performance speed.
IIS automatically caches static content (HTML pages, images, and style sheets) since these types of content do not change from request to request. IIS also detects changes in updated files and flushes the cache as needed.
The output from dynamic pages can now be cached in memory as well. However, not every dynamic page can use the output cache effectively. Pages that can be personalized, such as shopping cart or e-commerce transactions, cannot use the output cache because the dynamic output will probably not be requested repeatedly. Content output that results from a POST-type request to an HTML form also cannot be cached.
The output cache works well for pages that are semi-dynamic in nature, for example, when data is generated dynamically but is not likely to change from request to request based on the URL or the header information. Photo gallery applications, for instance, dynamically resize images for display on Web pages and can use the output cache to prevent the server from having to reprocess image resizing for each request.
http://technet.microsoft.com/en-us/library/dd239248(v=ws.10)
IIS supports two types of invalidation schemes for dynamic content. The first is a simple time-out period, using the configuration property CacheForTimePeriod. The other way to invalidate the cache is for IIS to detect a change to the underlying resource; the configuration property for this is CacheUntilChange. Use this type of invalidation scheme when you want cached content flushed only when the underlying resource actually changes.
Output caching allows you to manage output caching rules and to control the caching of served content. In IIS Manager, you can create caching rules, edit existing caching rules, and configure output cache settings.
http://technet.microsoft.com/en-us/library/cc771003(v=ws.10)
If your sites use lots of bandwidth, or if you want to use bandwidth more effectively, enable compression to provide faster transmission times between IIS and compression-enabled browsers. If your network bandwidth is restricted, as it is, for example, with mobile phones, compression can improve performance.
IIS provides the following compression options:
- Static files only
- Dynamic application responses only
- Both static files and dynamic application responses
Unlike dynamic responses, compressed static responses can be cached without degrading CPU resources.
Application Pool in IIS 7.0
An application pool is a group of one or more URLs that are served by a worker process or a set of worker processes. Application pools set boundaries for the applications they contain, which means that any applications that are running outside a given application pool cannot affect the applications in the application pool.
Application pools offer the following benefits:
- Improved server and application performance. You can assign resource-intensive applications to their own application pools so that the performance of other applications does not decrease.
- Improved application availability. If an application in one application pool fails, applications in other application pools are not affected.
- Improved security. By isolating applications, you reduce the chance that one application will access the resources of another application.
Most managed applications should run successfully in application pools with integrated mode, but you may have to run in classic mode for compatibility reasons. Test the applications that are running in integrated mode first to determine whether you really need classic mode.
http://technet.microsoft.com/en-us/library/cc753449(v=ws.10)
You can monitor and improve application pool health by having the Windows Process Activation Service (WAS) ping an application pool's worker process at set intervals. Note that worker process pinging differs from Internet Control Message Protocol (ICMP) pinging; instead, it uses an internal communication channel between the WAS and the worker process.
A lack of response from the worker process might mean that the worker process does not have a thread to respond to the ping request, or that it is hanging for some other reason. Based on the results of the ping request, WAS can flag a worker process as unhealthy and shut it down.
By default, worker process pinging is enabled. You may have to adjust the ping interval and the ping response time to gain access to timely information about application pool health without triggering false unhealthy conditions, for example, instability caused by an application.
The identity of an application pool is the name of the service account under which the application pool's worker process runs. By default, application pools operate under the Network Service user account, which has low-level user rights. You can configure application pools to run under one of the built-in user accounts in the Windows Server® 2008 operating system. For example, you can specify the Local System user account, which has higher-level user rights than either the Network Service or Local Service built-in user accounts. However, remember that running an application pool under an account that has high-level user rights is a serious security risk.
You can also configure a custom account to serve as an application pool's identity. Any custom account you choose should have only the minimum rights that your application requires. A custom account is useful in the following situations:
- When you want to improve security and make it easier to trace security events to the corresponding application.
- When you are hosting Web sites for multiple customers on a single Web server. If you use the same process account for multiple customers, source code from one customer's application may be able to access source code from another customer's application. In this case, you should also configure a custom account for the anonymous user account.
- When an application requires rights or permissions in addition to the default permissions for an application pool. In this case, you can create an application pool and assign a custom identity to the new application pool.
Connection Pooling in ADO.Net
http://technet.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx
Connecting to a database server typically consists of several time-consuming steps. A physical channel such as a socket or a named pipe must be established, the initial handshake with the server must occur, the connection string information must be parsed, the connection must be authenticated by the server, checks must be run for enlisting in the current transaction, and so on.
In practice, most applications use only one or a few different configurations for connections. This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling.
Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
Only connections with the same configuration can be pooled. ADO.NET keeps several pools at the same time, one for each configuration. Connections are separated into pools by connection string, and by Windows identity when integrated security is used. Connections are also pooled based on whether they are enlisted in a transaction. When using ChangePassword, the SqlCredential instance affects the connection pool. Different instances of SqlCredential will use different connection pools, even if the user ID and password are the same.
Pooling connections can significantly enhance the performance and scalability of your application. By default, connection pooling is enabled in ADO.NET. Unless you explicitly disable it, the pooler optimizes the connections as they are opened and closed in your application. You can also supply several connection string modifiers to control connection pooling behavior.
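A brief sketch of pooling in action (the server, database, and pool sizes are placeholders):

```csharp
using System.Data.SqlClient;

class PoolingDemo
{
    static void Main()
    {
        // Pooling is on by default; Min/Max Pool Size tune it, and
        // "Pooling=false" would opt out entirely.
        const string connStr =
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;" +
            "Min Pool Size=5;Max Pool Size=50";

        for (int i = 0; i < 100; i++)
        {
            using (var connection = new SqlConnection(connStr))
            {
                connection.Open();  // draws a physical connection from the pool
            }                       // Close/Dispose returns it to the pool
        }
    }
}
```

Because every iteration uses an identical connection string, the hundred Open calls are served by a handful of pooled physical connections rather than a hundred new ones.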
Engagement Model
Delivering projects using methodologies that match the drivers behind the project, ranging from Agile for more evolutionary projects to waterfall for standard projects where the percentage of unknowns is relatively small.
- Agile Methodology: The Agile development model is based on iterative development, wherein the entire software development life-cycle is broken down into smaller iterations (or parts). The project scope and requirements are clearly laid down at the start of the development process. We adopt this model for large projects as it helps to minimize the overall risk and lets the project adapt to changes quickly.
- Waterfall Methodology: The waterfall development model is best suited for projects where the requirements are static and will not change over the course of the software development life-cycle (SDLC). This development approach divides the overall project into sequential phases. Emphasis is on planning, time schedules, target dates, budgets, and implementation of an entire system at one time.
- Extreme Programming: Extreme Programming (XP) is a software development methodology that aims at improving software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates multiple short development cycles rather than one long one, which helps reduce the cost of change or modification.
Saturday, August 18, 2012
REST In and Out
One man, Roy Fielding, did ask these questions in his doctoral thesis, "Architectural Styles and the Design of Network-based Software Architectures." In it, he identifies specific architectural principles that answer the following questions:
• Why is the Web so prevalent and ubiquitous?
• What makes the Web scale?
• How can I apply the architecture of the Web to my own applications?
The set of these architectural principles is called REpresentational State Transfer (REST):
Addressable resources: The key abstraction of information and data in REST is a resource, and each resource must be addressable via a URI (Uniform Resource Identifier).
A uniform, constrained interface: Use a small set of well-defined methods to manipulate your resources.
Representation-oriented: You interact with services using representations of that service. A resource referenced by one URI can have different formats, because different platforms need different formats. For example, browsers need HTML, JavaScript needs JSON (JavaScript Object Notation), and a Java application may need XML.
Communicate statelessly: Stateless applications are easier to scale.
Hypermedia As The Engine Of Application State (HATEOAS): Let your data formats drive state transitions in your applications.
Friday, August 17, 2012
ADO.NET
ADO.NET is a set of class libraries that are part of the .NET Framework. The ADO.NET classes are generally divided into two types: connected classes and disconnected classes. The connected classes are those that are part of a namespace specific to a data source type. For example, the ADO.NET connected classes associated with SQL Server are part of the System.Data.SqlClient namespace. You use the connected classes to manage your connections to the SQL Server database and to access data in that database. The disconnected classes are part of the System.Data namespace and are independent from any data source. You use the disconnected classes to work with the data after it has been retrieved by the connected classes.
The disconnected classes never communicate directly with a data source. Figure 1 shows the more commonly used classes available in the System.Data.SqlClient and System.Data namespaces. The System.Data.SqlClient namespace includes the following connected classes specific to SQL Server:
- SqlConnection—Connects to the SQL Server .NET data provider in order to establish and manage the connection to the target database.
- SqlCommand—Contains the details necessary to issue a T-SQL command against a SQL Server database.
- SqlParameterCollection—Contains the collection of SqlParameter objects associated with a specific SqlCommand object. You access the collection through the SqlCommand object’s Parameters property.
- SqlParameter—Contains parameter-related information specific to a SqlCommand object.
- SqlDataReader—Provides efficient read-only access to the data retrieved through the SqlConnection and SqlCommand objects. The SqlDataReader is similar to a forward-only cursor.
- SqlDataAdapter—Provides a bridge between the connected classes and disconnected classes. This class includes the Fill and Update methods. Use the Fill method to populate a DataSet or DataTable object. Use the Update method to propagate updated data in a DataSet or DataTable object to the database.
The System.Data namespace includes the following disconnected classes:
- DataSet—Contains all the data retrieved through your connected objects. The DataSet object acts as a container for all DataTable objects and provides functionality that lets you work with the data in all the tables as single operations (such as saving data to a file).
- DataTableCollection—Contains the collection of DataTable objects associated with a specific DataSet object. You access the collection through the DataSet object’s Tables property.
- DataTable—Stores the data returned by your query. The data is stored in rows and columns, similar to how data is stored in a database table.
- DataColumnCollection—Contains the collection of DataColumn objects associated with a specific DataTable object. You access the collection through the DataTable object’s Columns property.
- DataColumn—Contains the metadata that describes the columns associated with a specific table. A DataColumn object doesn’t contain the stored data itself, only information about the column structure. The stored data is saved to DataRow objects.
- DataRowCollection—Contains the collection of DataRow objects associated with a specific DataTable object. You access the collection through the DataTable object’s Rows property.
- DataRow—Contains the actual data that is retrieved through your connected objects. Each DataRow object contains the data from one row of your query results.
In general, the disconnected objects act as an offline data cache for the data you retrieve through your connected objects. As a result, you can view and modify the data in a dataset without being connected to the data source.
http://www.sqlmag.com/article/scripting/accessing-sql-server-data-from-powershell-part-1
The Basics of ADO.NET
You will spend the most time working with five objects. They are the Connection object, the Command object, the DataReader object, the DataSet object, and the DataAdapter object.
ADO.NET provides you with predefined objects and methods. These objects and methods insulate you from the disparate data providers with which your application must interact.
ADO.NET implements four main data provider objects. They are Connection, Command, DataReader, and DataAdapter. Microsoft provides an implementation of these objects for SQL Server. They are called the SqlConnection, SqlCommand, SqlDataReader, and SqlDataAdapter objects. They ship as part of the .NET Common Language Runtime (CLR). The Connection object represents a single persistent connection to a data source. The Command object represents a string that ADO.NET can execute via a connection. The DataReader object provides you with a very efficient set of results that are based on a Command object. Finally, the DataAdapter object provides a link between the data provider objects and the DataSet object.
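A compact sketch of those objects working together (the Northwind database and the connection string are placeholder assumptions):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetBasics
{
    static void Main()
    {
        const string connStr =
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI";

        using (var connection = new SqlConnection(connStr))   // Connection
        {
            connection.Open();

            // Command + DataReader: fast, forward-only, connected access.
            using (var command = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", connection))
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}",
                        reader["CustomerID"], reader["CompanyName"]);
                }
            }

            // DataAdapter + DataSet: a disconnected cache of the results.
            var adapter = new SqlDataAdapter("SELECT * FROM Customers", connection);
            var dataSet = new DataSet();
            adapter.Fill(dataSet, "Customers");
            Console.WriteLine("Rows cached offline: {0}",
                dataSet.Tables["Customers"].Rows.Count);
        }
    }
}
```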
Working with a SQL Connection Object
The ADO.NET Connection object provides you with a persistent connection to data. After you have that connection, you can return rowsets and update data. When working with the Connection object, you can pass all or some of the following information to the provider:
- Server name
- Provider name
- UserID and password
- Default database
- Other provider-specific information
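A minimal sketch of supplying that information through SqlConnectionStringBuilder (all values are placeholders):

```csharp
using System.Data.SqlClient;

class ConnectionDemo
{
    static void Main()
    {
        // The builder assembles the items listed above into a
        // well-formed connection string.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "MyServer",       // server name
            InitialCatalog = "MyDatabase", // default database
            IntegratedSecurity = true      // or supply UserID and Password
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
        }
    }
}
```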
Wednesday, August 15, 2012
Procedures in SQL Server
Stored Procedures:
A stored procedure is a piece of programming code that can accept input parameters and can return one or more output parameters to the calling procedure or batch.
They can also return status information to the calling procedure to indicate whether they succeeded or failed.
SQL Server stored procedures provide excellent security for your database. You can grant rights to the stored procedure without granting rights to the underlying objects.
The SET NOCOUNT statement, when set to ON, eliminates the "xx row(s) affected" message in the SQL Express Manager window. It also eliminates the DONE_IN_PROC messages communicated from SQL Server to the client application. For this reason, the SET NOCOUNT ON statement, when included, improves the performance of the stored procedure.
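As an illustrative ADO.NET sketch of calling such a procedure (the GetOrderCount procedure, its parameters, and the Northwind database are hypothetical):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcDemo
{
    static void Main()
    {
        // Assumes a procedure along these lines, with SET NOCOUNT ON inside:
        //   CREATE PROCEDURE dbo.GetOrderCount
        //       @CustomerID nchar(5), @Count int OUTPUT
        //   AS SET NOCOUNT ON;
        //      SELECT @Count = COUNT(*) FROM Orders
        //      WHERE CustomerID = @CustomerID;
        //      RETURN 0;
        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI"))
        using (var command = new SqlCommand("dbo.GetOrderCount", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerID", "ALFKI");

            var count = command.Parameters.Add("@Count", SqlDbType.Int);
            count.Direction = ParameterDirection.Output;

            var status = command.Parameters.Add("@Return", SqlDbType.Int);
            status.Direction = ParameterDirection.ReturnValue;  // status code

            connection.Open();
            command.ExecuteNonQuery();
            Console.WriteLine("Orders: {0}, status: {1}", count.Value, status.Value);
        }
    }
}
```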
Using the @@ Functions
Developers often refer to the @@ functions as global variables. In fact, they don’t really behave like variables. You cannot assign values to them or work with them as you would work with normal variables. Instead they behave as functions that return various types of information about what is going on in SQL Server.
The @@TranCount function is applicable when you are using explicit transactions. Transactions are covered later in this chapter. The BEGIN TRAN statement increments @@TranCount by one (setting it to one for the outermost transaction). The COMMIT TRAN statement decrements @@TranCount by one. A ROLLBACK TRAN statement, however, rolls back the entire nested chain and resets @@TranCount to zero (unless it rolls back to a named savepoint, which leaves the count unchanged). When you use nested transactions, @@TranCount helps you to keep track of how many transactions are still pending.
The @@Error function returns the number of any error that occurred in the statement immediately preceding it.
When you build a stored procedure, a query plan is created. This query plan contains the most efficient method of executing the stored procedure given available indexes and so on.
SQL Server 2005
System Versus User Objects
System databases include Master, Model, MSDB, Resource, TempDB, and Distribution. SQL Server creates these databases during the installation process.
In addition to system databases, there are also system tables, stored procedures, functions, and other system objects.
Whereas system objects are part of the SQL Server system, you create user objects. User objects include the databases, stored procedures, functions, and other database objects that you build.
Each column or set of columns in a table that contains unique values is considered a candidate key. One candidate key becomes the primary key. The remaining candidate keys become alternate keys. A primary key made up of one column is considered a simple key. A primary key comprising multiple columns is considered a composite key.
A domain is a pool of values from which columns are drawn. A simple example of a domain is the specific data range of employee hire dates. In the case of the Order table, the domain of the CustomerID column is the range of values for the CustomerID in the Customers table.
Normalization
Normalization is the process of applying a series of rules to ensure that your database achieves optimal structure. Normal forms are a progression of these rules. Each successive normal form achieves a better database design than the previous form did.
First Normal Form
To achieve first normal form, all columns in a table must be atomic. This means, for example, that you cannot store the first name and last name in the same field. The reason for this rule is that data becomes very difficult to manipulate and retrieve if multiple values are stored in a single field.
Second Normal Form
To achieve second normal form, all nonkey columns must be fully dependent on the primary key. In other words, each table must store data about only one subject.
Third Normal Form
To attain third normal form, a table must meet all the requirements for first and second normal form, and all nonkey columns must be mutually independent. This means that you must eliminate any calculations, and you must break out data into lookup tables.
Although the developer’s goal is normalization, many times it makes sense to deviate from normal forms. We refer to this process as denormalization. The primary reason for applying denormalization is to enhance performance.
If you decide to denormalize, document your decision. Make sure that you make the necessary application adjustments to ensure that the system properly maintains denormalized fields. Finally, test to ensure that performance is actually improved by the denormalization process.
Integrity Rules
Although integrity rules are not part of normal forms, they are definitely part of the database design process. Integrity rules fall into two categories: overall integrity rules and database-specific integrity rules.
Overall Rules
The two types of overall integrity rules are referential integrity rules and entity integrity rules. Referential integrity rules dictate that a database does not contain any orphan foreign key values. This means that
- A primary key value cannot be modified if the value is used as a foreign key in a child table. This means that a CustomerID cannot be changed if the Orders table contains rows with that CustomerID.
- A parent row cannot be deleted if child rows are found with that foreign key value. For example, a customer cannot be deleted if the customer has orders in the Orders table.
SQL Server has two wonderful features. One is called Cascade Update, and the other is called Cascade Delete. These features make it easier for you to work with data, while ensuring that referential integrity is maintained. With the Cascade Update feature, SQL Server automatically updates the foreign key field on the child rows when the primary key of the parent is modified. This allows the system to modify a primary key while maintaining referential integrity. Likewise, the Cascade Delete feature deletes the associated child rows when the parent rows are deleted, once again maintaining referential integrity.
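In DDL terms, these features correspond to the ON UPDATE CASCADE and ON DELETE CASCADE options of a foreign key. Here is a sketch, executed from C# as in the earlier examples, using the familiar Customers/Orders pair (names hypothetical):
using System.Data.SqlClient;

class CascadeDemo
{
    static void Main()
    {
        // ON UPDATE CASCADE ripples parent key changes down to child rows;
        // ON DELETE CASCADE removes child rows when the parent is deleted.
        const string ddl = @"
ALTER TABLE Orders
ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID)
    ON UPDATE CASCADE
    ON DELETE CASCADE;";

        using (var connection = new SqlConnection(
            "Server=.;Database=Northwind;Integrated Security=true"))
        using (var command = new SqlCommand(ddl, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}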
Entity integrity dictates that the primary key value cannot be null. This rule applies not only to single-column primary keys, but also to multicolumn primary keys. In fact, in a multicolumn primary key, no field in the primary key can be null. This makes sense because if any part of the primary key can be null, the primary key can no longer act as a unique identifier for the row. Fortunately, SQL Server does not allow a field in a primary key to be null.
Database-Specific Rules
The other set of rules applied to a database are not applicable to all databases, but, instead, are dictated by business rules that apply to a specific application. Database-specific rules are as important as overall integrity rules. They ensure that the user enters only valid data into a database. An example of a database-specific integrity rule is that the delivery date for an order must fall after the order date.
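That delivery-date rule can be enforced declaratively with a CHECK constraint. A sketch, again with hypothetical table and column names:
using System.Data.SqlClient;

class DeliveryRuleDemo
{
    static void Main()
    {
        // The constraint rejects any row where the delivery date precedes
        // the order date, keeping invalid data out of the database.
        const string ddl = @"
ALTER TABLE Orders
ADD CONSTRAINT CK_Orders_DeliveryAfterOrder
    CHECK (DeliveryDate >= OrderDate);";

        using (var connection = new SqlConnection(
            "Server=.;Database=Northwind;Integrated Security=true"))
        using (var command = new SqlCommand(ddl, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}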
A view is a virtual table whose contents are based on a query. Like a table, a view is composed of rows and columns. Except in the case of a special type of view called an indexed view, a view stores no data of its own; its rows are produced from the underlying tables each time the view is queried.
A stored procedure is a piece of programming code that can accept input parameters and can return one or more output parameters to the calling procedure or batch (see Figure 1.9). Stored procedures generally perform operations on the database, including the process of calling other stored procedures. They can also return status information to the calling procedure to indicate whether they succeeded or failed.
Tuesday, August 14, 2012
IIS Authentication
http://msdn.microsoft.com/en-us/library/aa292118%28v=vs.71%29.aspx
An important part of many distributed applications is the ability to identify someone, known as a principal or client, and to control the client's access to resources. Authentication is the act of validating a client's identity. Generally, clients must present some form of evidence, known as credentials, proving who they are for authentication.
IIS provides a variety of authentication schemes:
- Anonymous (enabled by default)
- Basic
- Digest
- Integrated Windows authentication (enabled by default)
- Client Certificate Mapping
Regardless of which method you choose, after IIS authenticates the client it will pass a security token to ASP.NET. If you configure ASP.NET authentication to use Windows authentication and you enable impersonation, ASP.NET will impersonate the user represented by this security token.
Anonymous
Anonymous authentication gives users access to the public areas of your Web site without prompting them for a user name or password. Although listed as an authentication scheme, it is not technically performing any client authentication because the client is not required to supply any credentials. Instead, IIS provides stored credentials to Windows using a special user account, IUSR_machinename. By default, IIS controls the password for this account. Whether or not IIS controls the password affects the permissions the anonymous user has.
When IIS controls the password, a subauthentication DLL (iissuba.dll) authenticates the user using a network logon. The function of this DLL is to validate the password supplied by IIS and to inform Windows that the password is valid, thereby authenticating the client. However, it does not actually provide a password to Windows. When IIS does not control the password, IIS calls the LogonUser() API in Windows and provides the account name, password and domain name to log on the user using a local logon. After the logon, IIS caches the security token and impersonates the account. A local logon makes it possible for the anonymous user to access network resources, whereas a network logon does not.
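As a minimal sketch of the impersonation behavior described above, assuming ASP.NET is configured with <authentication mode="Windows" /> and <identity impersonate="true" /> in web.config, a page can show whose token the request thread is running under:
using System;
using System.Security.Principal;
using System.Web.UI;

public class WhoAmI : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // With impersonation on, the request thread runs under the token IIS
        // passed to ASP.NET: IUSR_machinename for anonymous authentication,
        // or the authenticated Windows user for the other schemes.
        Response.Write("Thread identity: " + WindowsIdentity.GetCurrent().Name);
    }
}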
EWS Managed API
The Microsoft Exchange Web Services (EWS) Managed API 1.1 provides an
intuitive managed API for developing client and server applications that
leverage Exchange 2010 data and business logic, whether Exchange is
running on premise or in the cloud. The EWS Managed API 1.1 makes
Exchange Web Services SOAP calls under the covers, so many environments
are already configured for EWS Managed API 1.1.
System Requirements
You must have the following items to complete this lab:
- Microsoft Visual Studio 2010
- Exchange Web Services Managed API 1.1 SDK
- Two accounts configured to have Microsoft Exchange mailboxes, referred to as the primary and secondary lab users in this lab.
To use the EWS Managed API, you need to have the following:
- The EWS Managed API, which you can download from the Microsoft Download Center. The EWS Managed API works with all versions of Exchange starting with Exchange 2007 SP1.
- A mailbox on an Exchange server that is running Exchange 2007 SP1 or a later version, or Exchange Online Preview. You must have the user name and credentials of the account. By default, direct EWS access is enabled for all Exchange Online Preview plans except for the Kiosk plan.
- The .NET Framework version 3.5 or later. Versions of the EWS Managed API starting with the EWS Managed API 2.0 Beta 2 require the .NET Framework 4.
- Familiarity with web services and managed programming.
Architecture
Exchange Web Services is deployed with the Client Access server role. Microsoft Exchange Server 2010 clients connect to the computer that is running Exchange 2010 that has the Client Access server role installed in an Active Directory directory service site by using an HTTPS connection. If the target mailbox is in another Active Directory site, the source Client Access server creates an HTTPS connection to the target Client Access server. The target Client Access server obtains the information by communicating over MAPI to the Exchange server that has the Mailbox server role installed and then sends it back to the source Client Access server. If the target mailbox is in the same Active Directory site, the Client Access server uses MAPI to communicate with the Mailbox server to obtain the information. The Client Access server then provides the data back to the client.
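Putting the pieces together, here is a minimal EWS Managed API sketch that connects through the Client Access server via Autodiscover and sends a message. The mailbox addresses and password are placeholders for the lab accounts described above:
using System;
using Microsoft.Exchange.WebServices.Data;

class EwsDemo
{
    static void Main()
    {
        // Assumes the EWS Managed API 1.1 assembly is referenced.
        var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
        service.Credentials = new WebCredentials("primaryuser@contoso.com", "password");

        // Autodiscover locates the Client Access server endpoint for the mailbox.
        service.AutodiscoverUrl("primaryuser@contoso.com");

        // Send a message from the primary to the secondary lab user.
        var message = new EmailMessage(service);
        message.ToRecipients.Add("secondaryuser@contoso.com");
        message.Subject = "EWS Managed API test";
        message.Body = "Sent with the EWS Managed API.";
        message.SendAndSaveCopy();
    }
}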
Sunday, August 12, 2012
Windows Power Shell Jump Start
Windows PowerShell® is a task-based command-line shell and scripting
language designed especially for system administration. Built on the
.NET Framework, Windows PowerShell helps IT professionals and power
users control and automate the administration of the Windows operating
system and applications that run on Windows.
Windows PowerShell is built on top of the .NET Framework common language
runtime (CLR) and the .NET Framework, and accepts and returns .NET
Framework objects. This fundamental change in the environment brings
entirely new tools and methods to the management and configuration of
Windows.
Windows PowerShell is very different.
- Windows PowerShell does not process text. Instead, it processes objects based on the .NET Framework platform.
- Windows PowerShell comes with a large set of built-in commands with a consistent interface.
- All shell commands use the same command parser, instead of different parsers for each tool. This makes it much easier to learn how to use each command.
Best of all, you do not have
to give up the tools that you have become accustomed to using. You can
still use the traditional Windows tools, such as Net, SC, and Reg.exe in
Windows PowerShell.
A cmdlet (pronounced "command-let") is a single-feature command that
manipulates objects in Windows PowerShell. You can recognize cmdlets by
their name format -- a verb and noun separated by a dash (-), such as
Get-Help, Get-Process, and Start-Service.
Windows PowerShell provides a new architecture that is based on objects, rather than text. The cmdlet that receives an object can act directly on its properties and methods without any conversion or manipulation. Users can refer to properties and methods of the object by name, rather than calculating the position of the data in the output.
Windows PowerShell provides a complete interactive environment. When you type a command or expression at the Windows PowerShell command prompt, the command or expression is processed immediately and the output is returned to the prompt.
This is true for all command types, including cmdlets, aliases, functions, CIM commands, workflows, and executable files.
You can also send the output of a command to a file or printer, or you can use the pipeline operator (|) to send the output to another command.
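Keeping to the C# used elsewhere on this blog, here is a sketch that hosts PowerShell through the System.Management.Automation assembly and runs a pipeline. Note that the results come back as objects whose properties are read by name, not parsed out of text:
using System;
using System.Management.Automation;   // reference System.Management.Automation.dll

class PipelineDemo
{
    static void Main()
    {
        // The pipeline operator (|) passes Process objects, not text, from
        // Get-Process through Sort-Object to Select-Object.
        using (PowerShell ps = PowerShell.Create())
        {
            ps.AddScript("Get-Process | Sort-Object WS -Descending | Select-Object -First 3");
            foreach (PSObject result in ps.Invoke())
            {
                // Properties are addressed by name, not by column position.
                Console.WriteLine("{0}: {1} bytes",
                    result.Properties["Name"].Value,
                    result.Properties["WS"].Value);
            }
        }
    }
}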
What is a script?
A script is a text file that contains one or more Windows PowerShell commands or expressions. When you run the script, the commands and expressions in the script file run, just as if you typed them at the command line.
Typically, you write a script to save a command sequence that you use frequently or to share a command sequence with others.
http://www.sqlmag.com/article/windows-powershell/powershell-scripting
http://technet.microsoft.com/en-us/library/ff730939.aspx
Thursday, August 9, 2012
This Keyword
The this keyword is used to inform the compiler that you wish to set the current object's name data field to the incoming name parameter; simply use this to resolve the ambiguity:
public void SetDriverName(string name)
{ this.name = name; }
Chaining Constructor Calls Using this
Another use of the
this keyword is to design a class using a technique termed constructor chaining.
This design pattern is helpful when you have a class that defines multiple constructors. Given the
fact that constructors often validate the incoming arguments to enforce various business rules, it
can be quite common to find redundant validation logic within a class’s constructor set.
A cleaner approach is to designate the constructor that takes the
greatest number of arguments
as the “master constructor” and have its implementation perform the required validation logic. The
remaining constructors can make use of the
this keyword to forward the incoming arguments to
the master constructor and provide any additional parameters as necessary. In this way, we only
need to worry about maintaining a single constructor for the entire class, while the remaining constructors
are basically empty.
Here is the final iteration of the
Motorcycle class (with one additional constructor for the sake
of illustration). When chaining constructors, note how the
this keyword is “dangling” off the constructor’s
declaration (via a colon operator) outside the scope of the constructor itself:
class Motorcycle
{
public int driverIntensity;
public string driverName;
// Constructor chaining.
public Motorcycle() {}
public Motorcycle(int intensity)
: this(intensity, "") {}
public Motorcycle(string name)
: this(0, name) {}
// This is the 'master' constructor that does all the real work.
public Motorcycle(int intensity, string name)
{
if (intensity > 10)
{
intensity = 10;
}
driverIntensity = intensity;
driverName = name;
}
...
}
Observing Constructor Flow
On a final note, do know that once a constructor passes arguments to the designated master constructor
(and that constructor has processed the data), the constructor invoked originally by the
caller will finish executing any remaining code statements.
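A short sketch of that flow, using the Motorcycle class above:
using System;

class Program
{
    static void Main()
    {
        // new Motorcycle(12) invokes Motorcycle(int), which forwards to the
        // master constructor via : this(intensity, ""). Once the master has
        // processed the data, control returns to Motorcycle(int) to run any
        // remaining statements in its body (here there are none).
        Motorcycle m = new Motorcycle(12);
        Console.WriteLine(m.driverIntensity);   // 10: the master constructor capped it
        Console.WriteLine(m.driverName.Length); // 0: the empty string passed along
    }
}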
Constructors
C# supports the use of class constructors, which allow the state of an object to be established at the time of creation. A constructor is a special method of a class that is called indirectly when creating an object using the new keyword. However, unlike a "normal" method, constructors never have a return value (not even void) and are always named identically to the class they are constructing.
Every C# class is provided with a freebie default constructor that you may redefine if need be. By definition, a default constructor never takes arguments. Beyond allocating the new object into memory, the default constructor ensures that all field data is set to an appropriate default value.
Defining Custom Constructors
Typically, classes define additional constructors beyond the default. In doing so, you provide the object user with a simple and consistent way to initialize the state of an object directly at the time of creation.
However, as soon as you define a custom constructor, the default constructor is
silently removed
from the class and is no longer available! Think of it this way: if you do not define a custom constructor,
the C# compiler grants you a default in order to allow the object user to allocate an
instance of your type with field data set to the correct default values. However, when you define
a unique constructor, the compiler assumes you have taken matters into your own hands.
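A quick sketch of this behavior:
using System;

class Point
{
    public int X, Y;

    // Defining this custom constructor removes the compiler-supplied
    // default constructor.
    public Point(int x, int y) { X = x; Y = y; }
}

class Demo
{
    static void Main()
    {
        // Point p = new Point();   // no longer compiles: no default constructor
        Point p = new Point(3, 4);  // the custom constructor is the only option
        Console.WriteLine("{0}, {1}", p.X, p.Y);
    }
}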