# Thursday, December 13, 2007

I'm publishing this post because googling the error message turned up no results, and I ended up solving it myself.

If you're an ASP.NET developer you probably know the ObjectDataSource control, which represents a business object that provides data to data-bound controls in multi-tier web application architectures.

I like this control; most of the time it spares you all the annoying plumbing of calling the BL/DAL objects to retrieve the data and populate the target presentation control.

On one of the pages of the website I'm working on (quite a complex one that lists data from several sources and procedures), I use such an object as the data source of the main GridView, which renders a list of records. To work with each different select method, I had to set the SelectMethod property and its specific parameters in the code-behind every time. Up to this point, everything was just fine...
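To make the scenario concrete, such a handler looked more or less like this (the method name, parameter name and dropdown are invented for illustration; MyGridView and MyObjectDataSource are the page's controls):

```csharp
// Hypothetical postback handler: point the ObjectDataSource at a
// different select method and rebind the grid.
protected void FilterButton_Click(object sender, EventArgs e)
{
    MyObjectDataSource.SelectMethod = "GetRecordsByCustomer";  // assumed BL method
    MyObjectDataSource.SelectParameters.Clear();
    MyObjectDataSource.SelectParameters.Add("customerId", CustomerList.SelectedValue);
    MyGridView.DataBind();
}
```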

It seems that working this way affects the other postback events on the page (because after a postback, OnInit and OnLoad are called first, and only then the event handler itself); at this point my page crashed with the error message: "The Select operation is not supported by ObjectDataSource '<objectdatasource_id>' unless the SelectMethod is specified."

This error is caused because the page expects the SelectMethod property to be initialized between OnInit and OnLoad, before the rest of the events fire.

The resolution is quite easy in this case:

First, remove the DataSourceID property from the control's declaration in the markup, and instead assign the control's DataSource to the desired ObjectDataSource in the OnInit method. Then, in the OnPreRender method, call the control's DataBind method to bind the data source. This lets every event fire first, and only afterwards populates the control (the GridView in my case) with data.

protected override void OnInit(EventArgs e)
{
   base.OnInit(e);
   MyGridView.DataSource = MyObjectDataSource;
}

protected override void OnPreRender(EventArgs e)
{
   base.OnPreRender(e);
   MyGridView.DataBind();
}

I hope it'll help someone...

.NET 2005 | ASP.NET | Bugs | C#
Thursday, December 13, 2007 1:02:19 AM (Jerusalem Standard Time, UTC+02:00)
# Monday, September 24, 2007

As I said in my last post, I haven't had much time to update my blog over the past month (or most of this one), so I hope I can catch up in the coming days and post some more about the ongoing issues that come up.

Last month, a guy named Roni Schuetz sent me an email regarding my post about Maintaining Data over Multiple Servers (Load Balancing on a Web Farm) (direct link here).
Roni is the creator of a project named Shared Cache, which provides a high-performance, distributed memory object caching system, generic in nature but intended to speed up dynamic web and/or Windows applications by alleviating database load. He suggested I use his project for maintaining cached data between multiple servers, and I agreed to test it.

Judging by Roni's documentation and usage explanations, the project is easy to use, and for free-to-use software I think it is highly recommended (or at least worth testing).

Unfortunately (or not), my company (IDT Global) has purchased an (expensive but also great) tool called ScaleOut SessionState in order to maintain session data over multiple servers.

So, if you face this issue yourself or just want to read about it, you can try Roni's indeXus.Net Shared Cache here.

Monday, September 24, 2007 10:15:21 AM (Jerusalem Standard Time, UTC+02:00)
# Tuesday, July 10, 2007

I am working against a third-party assembly in my current web application. I need to send US address information to this assembly and get back an answer as to whether the address exists or not. The assembly requires validation against an X.509 certificate (to ensure that only permitted clients can use the third party's services), which is installed on the server that runs the application (in the dev environment this is my local PC).
More details about it here.

The problem: in order to authenticate against this certificate, the process that runs the application needs to hold sufficient credentials to access the certificate and perform the authentication. And here is our problem: when trying to access this certificate from the ASP.NET application, it fails, because the process that runs the web application is ASPNET, and it doesn't have the credentials needed to authenticate with the certificate and get the information from the third party.

Suggested solutions:

  1. Credentials. Read the credentials from the web.config (username, password and domain) and impersonate a user with these credentials. This preserves the impersonated user throughout the impersonation context (System.Security.Principal.WindowsImpersonationContext), and the authentication against the certificate is done with those credentials. One more important thing: to keep this data protected, encrypt it before putting it into the web.config.
  2. IIS application pools. This is a great feature introduced in IIS 6.0, which lets you create one or more application pools and configure a level of isolation between different web applications. You can set the identity of an application pool, which is the account under which the pool's worker process runs. I thought of setting the credentials there, but I had one big problem: IIS 5 was installed on the production server, and it is not a dedicated server. (More details about application pools here.)
  3. Host a .NET component in COM+. This is the third solution and the best one for me under the current circumstances. Because I am working with several applications (assemblies), I want to host the component that validates the address against the third party in one place; this gives a unified behavior for all the applications (instead of setting these properties in the web.config of each web application, as in solution 1, remember?). In other words, I'll set the username and password on the COM+ component just once, to grant the process that runs this component the right and sufficient credentials. .NET provides a way to host your .NET components inside the COM+ environment; all the functionality you need to write a COM+-aware component in .NET can be found in the System.EnterpriseServices namespace.

So how do we do it (host a .NET assembly in COM+)?

Take a look at this code:

using System;
using System.Collections.Generic;
using System.Text;
using System.EnterpriseServices;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;

namespace ComPlusTest
{
    // Tags which lifecycle event fired
    public enum Action
    {
        Activate,
        Deactivate,
        Pooled,
        SimpleAction
    }

    [Transaction(TransactionOption.Required),
        ObjectPooling(MinPoolSize=2, MaxPoolSize=5, CreationTimeout=20000),
        ComVisible(true)]
    public class TestClass : ServicedComponent
    {
        protected override void Activate()
        {
            base.Activate();
            DoSomeAction(Action.Activate);
        }

        protected override void Deactivate()
        {
            base.Deactivate();
            DoSomeAction(Action.Deactivate);
        }

        protected override bool CanBePooled()
        {
            DoSomeAction(Action.Pooled);
            return base.CanBePooled();
        }

        public void ValidateAddress(string address)
        {
            try
            {
               // Do the validation against the 3rd party
               ContextUtil.SetComplete();
            }
            catch (Exception ex)
            {
               // Handle exception
               ContextUtil.SetAbort();
            }
        }

        [AutoComplete()]
        public void JustAction()
        {
            DoSomeAction(Action.SimpleAction);
        }

        private void DoSomeAction(Action act)
        {
            // Do the action
        }
    }
}

Let's dissect it:

  1. First, you can see that the class derives from ServicedComponent (which sits in the System.EnterpriseServices namespace), and that I marked our TestClass with some attributes. The first one is Transaction; the values for this attribute are the same as in traditional VB/VC++ COM+ development, i.e. Required, RequiresNew, Supported, etc. MinPoolSize and MaxPoolSize specify the minimum and maximum number of pooled object instances. The ComVisible attribute must be set to true to make an individual managed type or member, or all types within an assembly, accessible to COM (I spent lots of time figuring out some exceptions I hit while deriving from ServicedComponent).
  2. Since the class is marked to require a transaction, each method executes in a transaction (existing or new). Once ValidateAddress has executed, we need to either commit or roll back the transaction. This is done via static methods of the ContextUtil class: SetComplete commits the transaction, whereas SetAbort rolls it back.
  3. Just as an example, I defined a method called JustAction. This method is marked with the AutoComplete attribute, which means that once the method finishes executing, the transaction is automatically committed (equivalent to ContextUtil.SetComplete). In case of any error, the transaction is rolled back (equivalent to ContextUtil.SetAbort).
  4. The overridden Activate, Deactivate and CanBePooled methods are there just for testing (to observe the flow behavior).

Now, you have to sign your assembly with a strong name and add the following attributes to the AssemblyInfo class of your project:

[assembly: ApplicationName("ComPlusTest")]
[assembly: ApplicationActivation(ActivationOption.Library)]
[assembly: AssemblyKeyFileAttribute("ComPlusKey.pfx")]
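After building, the signed assembly still needs to be registered with COM+. Assuming the project's output file is ComPlusTest.dll, this can be done with the .NET Services Installation Tool from a Visual Studio command prompt:

```shell
# Register the serviced component with COM+; this creates the COM+
# application named by the ApplicationName attribute above.
regsvcs ComPlusTest.dll
```

(As far as I know, lazy registration also happens automatically the first time the component is instantiated, provided the caller has administrative rights.)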

Tuesday, July 10, 2007 4:49:24 PM (Jerusalem Standard Time, UTC+02:00)
# Wednesday, July 4, 2007

I got an email the other day from Omer Rauchwerger, the developer behind a nice tool (an add-in that runs inside Visual Studio 2005) called Regionerate. He asked me to 'play' with this tool and give my opinion about it, and so I did...

(Note: I had heard about it before Omer's email; Ken Egozi had posted about it on his blog.)

Before I outline my impressions, comments and feelings about it, I want to say a few words about the Regionerate website itself: the man did a really good job here. There is a great, detailed demo movie that shows the tool at work, some tutorials, a gallery and more...

About the tool itself: well, I downloaded the latest (beta) version to my PC and played with it a little. The usage is very convenient and indeed saves time while regioning your code; it's all done with a single right click and gives a nice and elegant outcome.

I was very impressed by the tool's custom Code Layout feature: you can customize the final layout of your code by editing a simple XML file (fully IntelliSense-enabled).

At the end of the day, great work has been done here, and I am looking forward to more innovations in the next versions, and of course to the final release.

Omer, if you find a way to let us title each specific region in addition to the current titles (before regioning, of course), that would be a great addition.

Download is here.

Wednesday, July 4, 2007 8:44:49 AM (Jerusalem Standard Time, UTC+02:00)
# Sunday, July 1, 2007

I had a performance problem in the web application I'm currently working on. In one of the flows in this application, I needed to call the database and update a large amount of data there, but this action took a lot of time, and the outcome was that users had to wait a long time until it finished; admit it, that's frustrating...

My first stab at a solution was to grab a new thread from the thread pool and run the action on it - quite a good solution, no? BUT then I remembered that ASP.NET 2.0 (and 1.x) already implements this in a better and friendlier way, using Asynchronous Pages.

But first, some background...
As we all know (or not), when ASP.NET receives a request, it asks the thread pool for a thread and assigns the request to it. A synchronous page holds that thread for the duration of the request, preventing it from serving other requests. That leads to my problem: when I call the database and run the long action (an UPDATE query), the thread assigned to the request is stuck doing nothing until the call returns. (This matters because the thread pool has a finite number of threads available.)

The Resolution is (of course) Asynchronous Pages.

Asynchronous pages offer a neat solution to this kind of problem. Once an asynchronous operation begins, in response to a signal from ASP.NET, the page returns its thread to the thread pool. When the operation completes, ASP.NET grabs another thread from the pool and finishes processing the request. This mechanism manages the pool's threads much more efficiently, because threads that would previously have been stuck waiting can now be used for other purposes.

Let's see some code:

First, you need to set the Async attribute in the @ Page directive at the top of the page:

<%@Page Language="C#" Async="true" ... %>

Setting this attribute to true tells the page to implement IHttpAsyncHandler. Next, you need to register your Begin and End methods via Page.AddOnPreRenderCompleteAsync:

// Register async methods
AddOnPreRenderCompleteAsync(
   new BeginEventHandler(BeginAsyncOperation),
   new EndEventHandler(EndAsyncOperation)
);

With these pieces in place, the page starts its normal life cycle, up to the end of the OnPreRender event. At this point ASP.NET calls the Begin method we registered earlier and the operation begins (calling the database, etc.); meanwhile, the thread that was assigned to the request goes back to the thread pool. The Begin method returns an IAsyncResult to ASP.NET, which lets it determine when the operation has completed; at that point a new thread is taken from the thread pool, and the End method (that we registered earlier, remember?) is called.

Note: We do not need to implement the IAsyncResult interface, the Framework implements it for us.

The Begin and End Methods:

IAsyncResult BeginAsyncOperation(object sender, EventArgs e, AsyncCallback cb, object state)
{
   // Kick off the long-running work and hand its IAsyncResult back to ASP.NET.
   // For example, with a SqlCommand field (_command) whose connection string
   // includes "Asynchronous Processing=true":
   return _command.BeginExecuteNonQuery(cb, state);
}

void EndAsyncOperation(IAsyncResult ar)
{
   // The DB operation is DONE - complete it and consume the result.
   _command.EndExecuteNonQuery(ar);
}

Nice, huh? So use it wisely...

Sunday, July 1, 2007 11:26:35 AM (Jerusalem Standard Time, UTC+02:00)
# Sunday, June 10, 2007

These days I am working on a very big web application...

In one of my aspx pages I needed to save a lot of data in the ViewState in order to persist it between postbacks, but when I looked at the rendered HTML, I saw a huge hidden field carrying the ViewState.

ASP.NET 2.0 came up with a feature that helps reduce the amount of ViewState data in that hidden field: the PageStatePersister.

When we override the PageStatePersister property and use the built-in SessionPageStatePersister, the behavior of the page remains the same, but the storage used for the bulk of the state data shifts from the hidden field to session state.

Implementation example:

protected override PageStatePersister PageStatePersister
{
   get { return new SessionPageStatePersister(this); }
}

In some cases you'll want to override this property only in a particular page, shifting just that page's ViewState data into the Session object; but what if you want to use it (wisely, of course) across your entire web application? Then you should implement this property in a custom base page and have all of your application's pages inherit from it.
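A minimal sketch of such a base page (the class name here is mine, just for illustration):

```csharp
using System.Web.UI;

// All pages in the application derive from this class instead of Page,
// so every page automatically keeps the bulk of its view state in session.
public class SessionStatePage : Page
{
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}
```

And in each page's code-behind: `public partial class Default : SessionStatePage { ... }`.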

The only disadvantage I can think of here concerns data lifetime: the session can lose its data when its timeout expires, while ViewState holds the data as long as the page lives, because it's embedded in the page itself.

Sunday, June 10, 2007 3:08:17 PM (Jerusalem Standard Time, UTC+02:00)
# Thursday, May 31, 2007

I am now working on a large web application that needs to be used by more than one website (at least 5 consumers, websites and web services), so I needed to do some isolation of my main core projects.

Some background...
I have a common assembly (a web application project) that holds only the user controls, server controls and custom controls, which need to serve all the other web applications that use them. This assembly needs a reference to the other web application in order to get information about some properties, session variables and global members; with this information it knows how to generate some actions at runtime (or even at design time). BUT, on the other hand, that web application needs to use the controls that the first assembly publishes. Here we have a problem: a circular reference, which is not allowed in the .NET Framework (nor anywhere else, I think...).

So, how are we going to solve this problem?

The solution is quite simple and is known as the Separated Interface pattern. (Click here for some more info.)

The main steps to implement it are:
Let's define a project called ProjectA, which holds the user control (etc.) implementations along with an interface, InterfaceB. ProjectA maintains a reference to InterfaceB, which declares whatever ProjectA needs from the other side: members, methods, events, etc.

Now, let's define ProjectB, which will implement InterfaceB. ProjectB references ProjectA, BUT ProjectA does not reference ProjectB, of course.

The result: ProjectA can access ProjectB's exposed members through the interface, and ProjectB can use ProjectA's controls.
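A minimal sketch of the pattern; the interface and member names are hypothetical:

```csharp
// --- In ProjectA (the controls assembly), alongside the controls ---
public interface ISiteContext        // plays the role of "InterfaceB"
{
    string CurrentUserName { get; }
}

public class GreetingControl : System.Web.UI.UserControl
{
    // ProjectA talks to the web application only through the interface.
    public ISiteContext SiteContext;

    protected override void OnPreRender(System.EventArgs e)
    {
        base.OnPreRender(e);
        if (SiteContext != null)
        {
            // Use whatever the hosting application exposed.
            string user = SiteContext.CurrentUserName;
        }
    }
}

// --- In ProjectB (the web application), which references ProjectA ---
public class SiteContext : ISiteContext
{
    public string CurrentUserName
    {
        get { return System.Web.HttpContext.Current.User.Identity.Name; }
    }
}
```

ProjectB hands a SiteContext instance to each control it hosts, so the dependency arrow points only one way.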

.NET 2005 | Bugs | Code | Patterns
Thursday, May 31, 2007 2:26:25 PM (Jerusalem Standard Time, UTC+02:00)
# Thursday, May 17, 2007

Hi fellows, how are you?

I read a nice article about editing and encrypting/decrypting web.config sections. The nicest thing about that feature is the ability to access the web.config content from the actual code-behind, at run time. (There could be a lot of reasons to access the file from code itself, and the API is very 'friendly'.)

Click here for the link to this article.
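For a taste of that API, here is a minimal sketch that encrypts the connectionStrings section with the DPAPI provider (the section choice is just an example):

```csharp
using System.Configuration;
using System.Web.Configuration;

// Open the current application's web.config and protect one section.
Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
ConfigurationSection section = config.GetSection("connectionStrings");

if (section != null && !section.SectionInformation.IsProtected)
{
    section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    config.Save();
}
// Reading the section back through the API decrypts it transparently.
```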

Bye bye...

Thursday, May 17, 2007 10:23:55 AM (Jerusalem Standard Time, UTC+02:00)
# Sunday, April 29, 2007

Hi again..

I am here again with the same issue, because of a long conversation I had with Oren Ellenbogen (an ex-co-worker) about extending and refactoring the solution from the former post (you can see it here if you missed it).

The main goal of encapsulating the Session/Application objects was to avoid casting every time we use them; this is annoying, especially when we use a specific object in most of the application's flows.
The other goal was the ability to manage these objects in one central place.

NOW, some extensibility...
That object has to be maintained every time we want to add a new session/application entry. Good usage of generics will solve this problem: it gives us the ability to add new objects wherever we want (example to follow...).

So, look at the following implementation:

public static class SessionRepository
{
   public static bool IsExist(string objectKey)
   {
      return HttpContext.Current.Session[objectKey] != null;
   }

   public static TObject GetInstance<TObject>(string objectKey)
   {
      return (TObject)HttpContext.Current.Session[objectKey];
   }

   public static void Add<TObject>(string objectKey, TObject obj)
   {
      HttpContext.Current.Session.Add(objectKey, obj);
   }
}

Some usage:

if (SessionRepository.IsExist("SomeObjectKey"))
{
   SomeObject obj = SessionRepository.GetInstance<SomeObject>("SomeObjectKey");

   // Do your things...
}

SessionRepository.Add<SomeObject>("SomeObjectKey", obj);

This implementation helps us with the casting issue and gives us extensibility options. I think there is one small disadvantage here - we also need to remember the keys of the objects stored in the session - but nothing is perfect.

Summary:

  1. Both solutions are good, and each has its own advantages/disadvantages; choose whichever suits you best.
  2. The first way (shown in the former post) gives you direct, explicit access to each object stored in the session/application, but has to be maintained every time we want to add a new object.
  3. The way shown here takes a different approach: it gives you extensibility, but you don't have explicit access to these objects.
  4. In both ways, the casting issue is covered!

That's it for today.

Comments will be appreciated...

Sunday, April 29, 2007 2:41:05 PM (Jerusalem Standard Time, UTC+02:00)
# Thursday, April 19, 2007

Hi!

In most of our web applications we (must) use the session object, which gives us a good way to store data (the session object lives over the HTTP protocol and exists for the whole of the user's session {except for expiration, etc.}).

Accessing session objects and variables is quite easy and simple. BUT, what happens when you want to store a complex struct or object in the session (or even some other system object)? THEN you must cast the session variable, and check that it exists, before you can access its properties, etc.

I have a suggestion that also encapsulates the session's variables and is easy to manage; pay attention:

First, I created a static class called Repository, which exposes the session variables as properties, so that access to these objects is much easier and more explicit.

The repository static class:

public static class Repository
{
    public static SomeObject SessionSomeObject
    {
        get
        {
            // Lazily create a default instance when the session entry is
            // missing, so callers never get a null back.
            SomeObject obj = HttpContext.Current.Session["SomeObject"] as SomeObject;
            if (obj == null)
            {
                obj = new SomeObject();
                HttpContext.Current.Session["SomeObject"] = obj;
            }
            return obj;
        }
        set
        {
            HttpContext.Current.Session["SomeObject"] = value;
        }
    }

    // Some more property declarations
}

(This class gathers all the session/application members in one place = good and convenient code management.)

NOW, look at the 'old-fashioned', regular way the syntax suggests (if we don't use the Repository static class):

if (Session["SomeObject"] != null)
{
   myObject = ((SomeObject)Session["SomeObject"]).MyProperty;
}
else
{
   // bla bla bla...
}

In the example above, we must check that the object exists in the session before we access its properties (if we don't, we'll get a runtime error). With the Repository class we cover this case with one line of code:

myObject = Repository.SessionSomeObject.MyProperty;

Here, even if the session entry is null, the getter creates an instance for us and returns the property's default value instead of throwing.

Have a good day...

P.S.
This code also applies to the Application object!

Thursday, April 19, 2007 12:19:24 PM (Jerusalem Standard Time, UTC+02:00)
# Wednesday, April 11, 2007

Enterprise Library 3.0 is ready; you can download it, right out of the oven...

The best thing I found there is the new Validation Application Block, which wasn't in earlier versions.
"Developers can use this application block to create validation rules for business objects that can be used across different layers of their applications." (quoted from the MSDN site).

You can find it here: http://msdn2.microsoft.com/en-us/library/aa480453.aspx

Enjoy...

Wednesday, April 11, 2007 10:40:47 AM (Jerusalem Standard Time, UTC+02:00)
# Monday, March 19, 2007

Hello guys, how are you?

In this post I want to talk about source control. As we all know (we Microsoft-environment developers, of course...), the 'natural' way that Microsoft pushes you into (if you are working with Visual Studio) is to work against Visual SourceSafe. Throughout my (3-year) career I have worked against this tool, which was not so bad, but in fact it has some disadvantages and problems (not to mention the incidents of lost code and its poor merging between code files).

So, let's talk about the Subversion version control system and give an overview of this open-source, free product.
'The goal of the Subversion project is to build a version control system that is a compelling replacement for CVS in the open source community. The software is released under an Apache/BSD-style open source license.' (taken from the Subversion site).

OK, some basic facts:

  1. FREE FREE FREE - As they say (in the quote above), this product is free to use, so you don't need to buy this kind of tool (if you are short of money... :))
  2. Directories, renames, and file meta-data are versioned. Lack of these features is one of the most common complaints against CVS. Subversion versions not only file contents and file existence, but also directories, copies, and renames. It also allows arbitrary metadata ("properties") to be versioned along with any file or directory, and provides a mechanism for versioning the `execute' permission flag on files.
  3. Apache network server option, with WebDAV/DeltaV protocol. Subversion can use the HTTP-based WebDAV/DeltaV protocol for network communications, and the Apache web server to provide repository-side network service. This gives Subversion an advantage over CVS in interoperability, and provides various key features for free: authentication, wire compression, and basic repository browsing.

and more...

I find this tool the best to use right now, because on my current project the work is also done overseas (I work regularly with developers in the US), and Subversion handles that well.
Another good thing is that I can check out any file I want to work on, and even if another developer is working on it at the same time, at the end of the work Subversion knows how to merge the code in a smart way, where Visual SourceSafe occasionally runs into problems.

So, if you want to explore more details and download it, go to the Subversion site at: http://subversion.tigris.org/

Comments will be appreciated.

Monday, March 19, 2007 10:57:22 AM (Jerusalem Standard Time, UTC+02:00)
# Thursday, February 15, 2007

Hello!

Long time no see, I know...
It's because I am quite busy at work; we've been working full-time on a very large web project for one of the government offices, but this project is coming to an end.

Now, to our issue...
First, some words about the application architecture: the application is divided and built as N-tier layers; every tier is isolated from the others and lives as a single, separate assembly (.dll).
The tiers are:

  • Entities Layer - This layer holds and represents the entities of the application; for each database table there is an entity class which exposes all its fields as properties, according to each field's specification. This class is a typed DataSet, which holds all the data and is generated automatically; in addition, there is another class that represents a filter, whose purpose is to hold filter values when necessary.
  • Data Access Layer - Every entity class has a DAL class which implements the main CRUD (create, read, update and delete) methods against the database. For easier and more comfortable work, we use the SqlHelper of the Data Access Application Block v2.
  • Business Logic Layer - This layer holds business logic classes that implement the flows of more complex actions, like transactions and work against several tables.
  • Presentation Layer - This layer holds the presentation web pages. All the pages are fully AJAX-enabled to give the user the best surfing experience.

OK, now that I've described the architecture, let's get to the problem I bumped into.

When I wanted to fill my typed DataSet using SqlHelper, I first tried the classic method:

UsersDS ds = SqlHelper.ExecuteDataset(con, CommandType.StoredProcedure, StoredProcedures.GetUserById, idParam);

But I ran into a problem filling the typed DataSet (UsersDS): this method returns a plain, generic DataSet with no knowledge of the typed DataSet, which was a problem (a little one... :))

What's new in the Data Access Application Block v2 is the FillDataset method, which knows how to fill the exact typed DataSet and the DataTables that exist in it, and it goes like this:

SqlHelper.FillDataset(con, StoredProcedures.GetUserById, ds, new string[] { "E_Users" }, idParam);

Here you must specify the typed DataTable(s) that exist in the typed DataSet you want to fill. As you can see, they are passed to the method as a string array, so you can have several tables filled by sending their names.

That's it folks; as usual, I will be glad to hear additions and comments.

Thursday, February 15, 2007 10:48:22 AM (Jerusalem Standard Time, UTC+02:00)
# Thursday, December 28, 2006

Some of the improvements in that service pack are:

1. Refactoring performance in ASP.NET Web Site projects:
    Before determining if an .aspx page should be loaded, the refactoring operation will:

      • Perform a lexical search on the element that is being refactored to determine if it exists in an .aspx page.
      • Determine if a reference is accessible from the current scope.

2. Web Site Projects and Web Application Projects general issues:
    The Web Applications project system does not detect missing web.config files. Adding a control that requires configuration information will cause a false folder to appear in Solution Explorer. The workaround is to add a web.config file manually before you add any controls to a Web Application project.

   Web Application projects that contain subprojects that reference controls in the root project may hang the IDE.

   If a Web site solution that contains .pdb and .xml files is added to TFS source control, the .pdb files and .xml files may not be added correctly.

   Visual Studio will leak memory when you operate a Wizard inside a View inside a Multiview. The workaround is to save the solution and then restart Visual Studio.

   Changes to the bin folder in Web site and Web Application projects can cause Visual Studio to create a shadow copy of the entire bin folder. This copying can slow the performance of Visual Studio and consume disk space.

   If your page and user controls exist in different namespaces that are under the same root namespace, the generated code will not compile because the namespace that the designer creates for the declaration of the user control inside the page is wrong. The workaround is to delete the declaration from the designer file and then put it in the code-behind file. Once it is moved to the code-behind file, it will remain there unaltered even if you change the page.

You can download it via this link: http://www.microsoft.com/downloads/details.aspx?familyid=BB4A75AB-E2D4-4C96-B39D-37BAF6B5B1DC&displaylang=en

Thursday, December 28, 2006 12:29:42 PM (Jerusalem Standard Time, UTC+02:00)
# Thursday, November 23, 2006

Hey guys how are you?

After a long conversation with a work colleague, I thought I should sharpen the facts about Application Domains - aka AppDomains.

In the .NET environment, an application domain - or AppDomain - plays a role similar to an operating system process. The AppDomain is both a container and a boundary. The .NET runtime uses an AppDomain as a container for code and data, just like the operating system uses a process as a container for code and data. And just as the operating system uses a process to isolate misbehaving code, the .NET runtime uses an AppDomain to isolate code inside a secure boundary.

An AppDomain belongs to exactly one process, but a single process can host multiple AppDomains. An AppDomain is relatively cheap to create (compared to a process) and carries relatively less overhead to maintain than a process. For these reasons, an AppDomain is a great solution for an ISP hosting hundreds of applications: each application can live inside an isolated AppDomain, and many of these AppDomains can exist inside a single process - a cost saving.

Let's take an example from REAL life:
Assume you have created two ASP.NET applications on the same server. What happens inside the system?

First of all, one ASP.NET worker process will run both applications (you can find the process in Task Manager as aspnet_wp.exe on Windows XP or as w3wp.exe on Windows 2003). Each application will have its own AppDomain, including its own Cache, Application, and Session objects.
BUT, the code of both applications runs inside the same process!

What about static members or shared classes? In this case, each AppDomain has its own copy of the static members (fields); the data and code are not shared, and are held safely isolated inside the boundary provided by the AppDomain.
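To make the static-members point concrete, here is a minimal sketch (the class and domain names are my own) for the full .NET Framework, where AppDomain.CreateDomain is available. Each AppDomain keeps its own copy of the static field:

```csharp
using System;

public class Counter : MarshalByRefObject
{
    public static int Value;        // one copy of this field per AppDomain

    public void Increment() { Value++; }
    public int Read() { return Value; }
}

public static class Program
{
    public static void Main()
    {
        Counter local = new Counter();
        local.Increment();
        local.Increment();              // default domain's copy is now 2

        // Create a second AppDomain and a Counter proxy inside it;
        // calls on the proxy execute in the other domain.
        AppDomain other = AppDomain.CreateDomain("OtherDomain");
        Counter remote = (Counter)other.CreateInstanceAndUnwrap(
            typeof(Counter).Assembly.FullName, typeof(Counter).FullName);
        remote.Increment();             // OtherDomain's copy is now 1

        Console.WriteLine(local.Read());   // the default domain's copy
        Console.WriteLine(remote.Read());  // OtherDomain's separate copy
        AppDomain.Unload(other);           // tears the whole domain down
    }
}
```

Unloading the AppDomain discards its code and data in one shot - the same mechanism ASP.NET relies on when it recycles an application.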

Loading some new assemblies...
Suppose you copy an updated DLL into the application folder or the bin subdirectory. The ASP.NET runtime will detect it and start a new AppDomain, because it cannot load the new assembly into the running one. Requests already in progress finish their work in the old AppDomain, and subsequent requests run against the new AppDomain that holds the updated DLL and executes its code.

A last word...
I think one of the good advantages of the AppDomain is isolation: each application gets its own memory inside its AppDomain (bounded by the process's capacity, of course), and if there is a runtime crash in one of them, the rest of the applications running in the same process will not crash with it.

I will be glad to hear some comments and additions... :)

Thursday, November 23, 2006 3:03:00 PM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Thursday, August 24, 2006

Hey guys, how are you these hot and exhausting days?? I am pretty OK, except for the heat...

If you haven't heard about ReSharper yet, a few words about it:
ReSharper is an add-on to Visual Studio 2003 and 2005. It comes equipped with a rich set of features that greatly increase the productivity of C# and ASP.NET developers. With ReSharper you get intelligent coding assistance, on-the-fly error highlighting and quick error correction, as well as unmatched support for code refactoring, unit testing, and a whole lot more. All of ReSharper's advanced features are available right from Visual Studio.

This add-on includes features like: Error Highlighting and Quick-Fixes, Advanced Coding Assistance, Numerous Refactorings, Navigation and Search, Unit Testing, ASP.NET Editing, and NAnt and MSBuild Script Editing. You can read more details at the link attached below.

So, you can try this good add-on by downloading a 30-day evaluation from the jetbrains.com site here, and if you like it, I suggest buying it (on your company's account, of course :))

Thursday, August 24, 2006 9:21:38 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Monday, August 21, 2006

Hello!

Yesterday, while surfing the web to find some interesting tutorials and innovations, I came across a nice article written by a college mate of mine, Evyatar Ben-Shitrit.

In the article he explains how he created the ScrollableListBox custom control. This control derives from ListBox, supports a horizontal scroll bar, and still behaves like the ASP.NET ListBox control.

So, I recommend reading it at The Code Project web site here.

Fare well...

Monday, August 21, 2006 2:31:30 PM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Wednesday, August 2, 2006

Hello!

I am still working on a big web application at work. I would be glad to tell you about the application, but that is for another conversation. I am glad to say that we are close to the end of the project and are now doing the last fine tuning on it.

The thing I have had to deal with over the last few days is publishing application errors in an orderly fashion to the event viewer. The reason for doing this is to be able to track, at runtime, the bugs, errors, and exceptions that can appear while the application is in production. In that situation we don't have the CLR debugger to find out what went wrong (if something happened, of course...), so we must publish the exception to the system's event viewer, or just to a simple log file (which is less recommended than publishing to the event viewer).

Now, to the implementation (the important thing!!!)

In order to publish an error to the event viewer, we use Microsoft's Microsoft.ApplicationBlocks.ExceptionManagement assembly. This assembly exposes all the publishing tools we need to publish errors (and more...).

In my web application, in the global.asax file, I wanted to publish the exception to the event viewer from the Application_Error method. It is very important to do it there, because this method is called (by the application, of course) on every application error - runtime errors, exceptions, and situations that the application and system don't know how to deal with.

Now, before publishing the error to the event viewer, you need to distinguish between the different exceptions. Do this with your own knowledge of each exception that can occur, but it is important to know that every Response.Redirect("...", true) and Server.Transfer("...") also raises a ThreadAbortException.

Exception lastError = Server.GetLastError();

if (lastError is ThreadAbortException || lastError.InnerException is ThreadAbortException)
{
   // Eat the exception - it was caused by Response.Redirect(..., true) or Server.Transfer(...),
   // so there is nothing worth publishing.
   Server.ClearError();
}
else
{
   Microsoft.ApplicationBlocks.ExceptionManagement.ExceptionManager.Publish(lastError.GetBaseException());
   Server.ClearError();
   Server.Transfer("~/Error.aspx", false);
}

This example shows how the exception publishing is handled.

Now, do not forget to declare the application name and the exception-publishing configuration in the web.config file:

<exceptionManagement mode="on">
        <publisher assembly="Microsoft.ApplicationBlocks.ExceptionManagement" type="Microsoft.ApplicationBlocks.ExceptionManagement.DefaultPublisher" applicationname="APPLICATION_NAME"/>
</exceptionManagement>

One more thing... you need to register this assembly together with the application name in the registry, in order to grant the application the rights to publish errors in the event viewer. If you don't, the system won't let you write to the event log and you will get the security exception "The event source ExceptionManagerInternalException does not exist and cannot be created with the current permissions." - and you will spend plenty of time trying to solve it :) (like me...)

How to register this to the registry you ask?

Open notepad and write there this code:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\APPLICATION_NAME]
"EventMessageFile"="C:\\WINDOWS\\Microsoft.NET\\Framework\\v2.0.50727\\EventLogMessages.dll"

Save the file with a .reg extension and double-click it; this will add the key to the system's registry.

So, bye for now...

Wednesday, August 2, 2006 7:47:28 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Tuesday, July 11, 2006

Didn't I tell you that I love .NET 2.0? If I didn't, I am saying it now...

This is a new method in .NET 2.0, but whoever plays with 2.0 should know it by now. Instead of working hard to sort a collection (or a List<T>, which descends from it) of entities yourself, you now have the Sort method, which does it for FREE and with better performance.
In any case, I decided to show a simple example for anyone who doesn't know it, or who just wants to be impressed by it again.

* Note: I assume in this post that you are familiar with generics and delegates.

Let's assume that you have an entity structured like:

public struct Entity
{
   private int _id;
   private string _name;

   // and more...

   public int Id
   {
      get{return _id;}
      set{_id = value;}
   }
   public string Name
   {
      get{return _name;}
      set{_name = value;}
   }
}

Now, suppose you have a list of these entities and you want to sort it by name and display it sorted. Follow the example code below and see how easy it is using an anonymous delegate:

// sortDirection is a variable that determines the sort direction
int sortParam = sortDirection == "Ascending" ? 1 : -1;

entitiesList.Sort(new Comparison<Entity>(
   delegate(Entity e1, Entity e2)
   {
      return sortParam * e1.Name.CompareTo(e2.Name);
   }));

Nice & easy, that's it (no sophisticated actions needed).

More tutorials about anonymous delegates (and delegates in general) can be found on the blog of Oren Ellenbogen (a team leader in my department) here.

So, be well.
P.S. Again... I will be glad to hear some comments or sharpening.

Tuesday, July 11, 2006 9:19:32 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Thursday, June 29, 2006

Hello!

I haven't posted anything new for a long time because of the hard work on the Tigers project at work, but while programming away I encountered a new ASP.NET 2.0 Page property called PreviousPage.
This property gives you access to the previous page - with all its web controls and their data - from the page you have been transferred to.

I found this property very useful; it can give you an 'easy life' when fetching data from the page the user came from.

For example: suppose you want to use specific information from the page you have just arrived from, like a search-term text. In the 'old-fashioned' way you had to save the data in a Session variable, or pass it on the URL's query string, where it is exposed to everyone. The PreviousPage property avoids these ways: it lets you look up the specific control on the previous page that holds the data. (Note that PreviousPage is populated only on a cross-page postback - a button with PostBackUrl set - or a Server.Transfer, not on a Response.Redirect.) For example:

if (Page.PreviousPage != null)
{
   if (Page.PreviousPage.FindControl("txtSearchTerm") != null)
   {
      string term = ((TextBox)Page.PreviousPage.FindControl("txtSearchTerm")).Text;

      //do your thing with this data...
   }
}

Here I checked whether there is a previous page and whether it contains the txtSearchTerm TextBox control; if so, I grab its data and use it.
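For completeness, here is a minimal sketch of the source page's markup (the page name is my own assumption; the TextBox ID matches the code above). It is the PostBackUrl cross-page post that makes PreviousPage non-null on the target page:

```aspx
<%-- Search.aspx (hypothetical source page) --%>
<asp:TextBox ID="txtSearchTerm" runat="server" />
<asp:Button ID="btnSearch" runat="server" Text="Search"
            PostBackUrl="~/SearchResults.aspx" />
```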

Is it nice or not?

Thursday, June 29, 2006 7:38:26 PM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Monday, June 19, 2006

Hey all!

While developing the Data Access Layer (DAL) in my "home developing" project - Haverut.co.il - I deliberated whether to use the old-fashioned way, Microsoft's Application Blocks, where I implement all the database contact by hand and use the 'jacket' of the Application Blocks adapters, OR to use typed DataSets...
I chose the second option, because (as is known) this module knows how to generate code from database tables, creating the main CRUD (Create, Read, Update and Delete) methods by default, and also custom SQL queries.

But this is not the main issue of this post...

While working, I needed to run several related commands against the database, so I needed a transaction to tie these commands together and avoid command failures or 'half-way actions'.

Now, the biggest deliberation: use the SqlTransaction class, or the TransactionScope class (a new one that came out in .NET 2.0)?

At first I decided to use TransactionScope, because it is very easy to use and doesn't give you a 'pain in the neck'. The usage is easy: you wrap the wanted scope with this class, and all the work inside is done safely under the transaction.
Further documentation can be found here:

But this class has a couple of disadvantages:

  • Lower performance: with a large number of users and actions, the performance of these operations can become very bad and slow.
  • By default, when using this class, System.Transactions looks for an ambient transaction - a transaction that is otherwise current - or for a TransactionScope object that dictates that Current (a static property of this namespace) is null. If it cannot find either of these, System.Transactions queries the COM+ context for a transaction. Note that even though System.Transactions may find a transaction from the COM+ context, it still favors transactions that are native to System.Transactions. This is not recommended for us, because we would need to handle the COM+ context in addition to our application context. More info you can find here.

Because of that, I decided to use SqlTransaction over my typed DataSets' operations.

To do this in an appropriate way, I used a partial class (a very nice innovation in .NET 2.0) with the same name as the typed DataSet's TableAdapter class, in order to 'continue' its code and reach some of its members - mainly the member that does the work against the database: _adapter.
This member is private and is not accessible from outside.

Instead of a BeginTransaction method, I implemented a Transaction property on my TableAdapters, like this:

partial class CitiesTableAdapter
{
   public SqlTransaction Transaction
   {
      get { return _adapter.SelectCommand.Transaction; }
      set
      {
         if (_adapter == null)
         {
            InitAdapter();
         }

         _adapter.InsertCommand.Transaction = value;
         _adapter.UpdateCommand.Transaction = value;
         _adapter.DeleteCommand.Transaction = value;
      }
   }
}

This property assigns the given transaction to the Transaction property of all the adapter's commands. Now I can run CRUD methods as I like, knowing they execute under the SqlTransaction's control:

CitiesTableAdapter citiesAdapter = new CitiesTableAdapter();

citiesAdapter.Connection.Open();
try
{
   SqlTransaction trans = citiesAdapter.Connection.BeginTransaction();
   try
   {
      citiesAdapter.Transaction = trans;

      // CRUD the table here...

      trans.Commit();
   }
   catch
   {
      // Roll back if there is a problem, then rethrow.
      trans.Rollback();
      throw;
   }
}
finally
{
   citiesAdapter.Connection.Close();
}

A nice way of implementing it; I hope it helps someone...

See you

Monday, June 19, 2006 8:46:54 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Monday, June 12, 2006

Hi!

Yesterday, while working on a big web application project, I wanted to copy a System.Collections.Generic.Dictionary<TKey, TValue>'s values into a System.Collections.Generic.List<TValue>.

Instead of running a loop over the Dictionary<TKey, TValue> and copying its values into the List<TValue> one by one, I used the AddRange method.
For example:

Dictionary<int, Job> myDictionary = new Dictionary<int, Job>();

// Fill the dictionary...

List<Job> listJobs = new List<Job>();
listJobs.AddRange(myDictionary.Values);
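As a side note, the same copy can be done in one step, since List<T> has a constructor that accepts any IEnumerable<T>. A small self-contained sketch (the data here is made up for illustration):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        Dictionary<int, string> jobs = new Dictionary<int, string>();
        jobs[1] = "Developer";
        jobs[2] = "Tester";

        // One-liner alternative to AddRange: the List<T> constructor
        // accepts any IEnumerable<T>, including Dictionary<,>.Values.
        List<string> jobNames = new List<string>(jobs.Values);

        Console.WriteLine(jobNames.Count); // 2
    }
}
```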

Hope it will help someone...
see you later

Monday, June 12, 2006 8:01:28 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Thursday, June 8, 2006

Starting this month I am developing a new web application project, and I think it will be one of the best *** sites on the web (*** means the plan is still secret). The site will be called Haverut and it will be hosted at: www.haverut.co.il or www.haverut.com.

Now, as you may know, Microsoft supplies nice Membership and Roles providers that generate a very comfortable SQL Server database to work with, and a very comprehensive API that enables you to manage the whole membership-and-roles issue.

My first thought was to build the pilot of my application against an MS Access database, and of course I wanted to use the Membership and Roles providers that Microsoft supplies. After long searches on the web I found an .msi file (you can download it here) that, once installed on your machine, adds a new template option to VS 2005 called 'ASP.NET Access Providers'. This template automatically generates the Access database with all its tables and queries.

But on second thought I decided to build the database on SQL Server from the beginning (much more secure and efficient).

Very good articles and tutorial references can be found on Scott Guthrie's blog at: http://weblogs.asp.net/scottgu/archive/2006/02/24/438953.aspx

See you...

(P.S. - In case I didn't mention it, the Membership and Roles providers exist in the first place for SQL Server, and you can set them up with the aspnet_regsql command from the Visual Studio 2005 Command Prompt.)

Thursday, June 8, 2006 8:13:15 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Tuesday, June 6, 2006

Yesterday at noon, in the middle of our department's technical conversation, I raised the issue of the difference between the IHttpHandler and IHttpModule models, because I'm using an IHttpHandler in part of the application I am working on.

I also want to clarify the findings of Oren Ellenbogen (one of our team leaders). He claims, correctly, that the application's outer 'clothing' (Global.asax) acts as an IHttpModule that "hosts" several IHttpHandlers: a specific request goes through the IHttpModules first, and they know how to route it to the specific handler (IHttpHandler) that will handle it.

Here, I want to go into details about those two nice and essential models.

HttpHandler - Every typical page (one that derives from System.Web.UI.Page) implements the IHttpHandler interface. Writing an IHttpHandler is no different than writing a typical page or control. It has access to server application objects like Session, Request, and Response. An HttpHandler is created for each request to the server, and its lifetime spans the ProcessRequest step. (It is important to mention that this happens before the page events - Page_Init, Page_Load, etc. - are raised.)

Another thing to know about HttpHandler behavior is the IsReusable property defined in the interface. If it returns true, the HttpHandler won't be destroyed when control exits ProcessRequest - it will be released to a pool for future requests. This means that request-specific data must be de-initialized at the end of the request, or re-initialized at the beginning of the next one.

A nice example of use is URL rewriting. In a previous web application we used a handler to support multi-lingual states (English and Hebrew). We wrote an HttpHandler that, on every request to the server, checks the browser URL, and by specific address symbols "guides" the page as to what language and direction to display.
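For illustration, a minimal custom handler sketch (the class name and response text are my own). It implements just the two members of the interface, and would be wired up through an <httpHandlers> entry in web.config:

```csharp
using System.Web;

// Minimal IHttpHandler: returns a plain-text response for every
// request that is mapped to it.
public class HelloHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from HelloHandler, path: " +
                               context.Request.Path);
    }

    // True means the same instance may be pooled and reused for later
    // requests - so keep no per-request state in instance fields.
    public bool IsReusable
    {
        get { return true; }
    }
}
```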

HttpModule - The HttpModule is a filter for all requests. It receives notifications at various processing points during the lifespan of each request. We can map an HttpModule to all application requests.

The main difference between an HttpModule and an HttpHandler is that an HttpModule instance serves all the application's requests, while an HttpHandler instance serves a single request at a time. Also, an HttpModule shouldn't store data about any particular request, because it handles all the application's requests - as opposed to an HttpHandler, which can hold data for a specific request (the IsReusable property, remember...?)

An important advantage of an HttpModule over an HttpHandler is the option of initializing and maintaining application state, since a live class instance exists for the whole application lifespan. For example, you can load a data structure (an XML string, say) in the Init method and use it safely across the application's lifespan.
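And a minimal module sketch (the class name and header name are my own) that hooks pipeline events for every request, keeping the per-request data in HttpContext.Items rather than in module fields. It would be registered via an <httpModules> entry in web.config:

```csharp
using System;
using System.Web;

// Minimal IHttpModule: times every request in the application and
// reports the elapsed time in a response header.
public class TimingModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Runs once per HttpApplication instance - a good place to
        // load long-lived state shared across requests.
        application.BeginRequest += new EventHandler(OnBeginRequest);
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        context.Items["started"] = DateTime.Now;   // per-request slot
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        DateTime started = (DateTime)context.Items["started"];
        context.Response.AppendHeader("X-Elapsed-Ms",
            (DateTime.Now - started).TotalMilliseconds.ToString());
    }

    public void Dispose() { }
}
```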

Hope I answered some questions (if you had some...)

A nice implementation example can be found on MSDN or here.

Tuesday, June 6, 2006 9:29:29 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Wednesday, May 31, 2006

Hello all...

After a long evening of 'mangal-ing' (barbecuing) and mingling at the company party at 'GolfiTech' Tel Aviv yesterday, I came in to work this morning and decided to share some Master Page material with you.

In the current project we are working on, we are using a MasterPage (a new feature in ASP.NET 2.0). This is a great control that does the work of 'header' and 'footer' user controls, for example (which are customized by the user); it gives us a better and easier unified design for all (or some of) the application's pages.

Good tutorials and examples of how to create a Master Page can be found here.

And now to the issue of this post. In part of the application I am working on there is, in addition to the top-level (super) master page, a nested one. When I tried to get a control (from the server side, of course) that sits in the nested master page, I couldn't find it - and this was a problem, of course... :)

The solution is to go up until you reach the super master page, and then to go down into the nested master page where the control sits. The syntax is:

Page.Master.Master.FindControl("<Container Name>").FindControl("<Control Name>");

Where <Container Name> is the ID of the ContentPlaceHolder that holds the content.
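The same lookup can be written a bit more defensively in code-behind (the placeholder and control IDs below are placeholders of my own invention):

```csharp
// Walk up to the top-level master, then down into its placeholder.
MasterPage superMaster = Page.Master.Master;             // nested -> super
Control container = superMaster.FindControl("cphMain");  // assumed placeholder ID
if (container != null)
{
    // 'as' returns null instead of throwing if the cast fails.
    Label lbl = container.FindControl("lblTitle") as Label; // assumed control ID
    if (lbl != null)
    {
        lbl.Text = "Found it!";
    }
}
```

The null checks matter because FindControl returns null (rather than throwing) when the ID is not found, which would otherwise surface later as a NullReferenceException.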

Hope this will help someday...

Wednesday, May 31, 2006 8:32:55 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Wednesday, May 24, 2006

Hello all again...

Some months ago, while reading professional tutorials, I encountered a nice implementation (maybe some of you have come across it before, but I think it is a nice syntax to know). It is called an indexer - a 'smart array' on a specific object.

Look at the example below:

using System.Collections;

class EranEntity
{
   private Hashtable qualities = new Hashtable();

   // Indexer (i.e. smart array) 
   public object this[string key]
   {
      get { return qualities[key]; }
      set { qualities[key] = value; }
   }
}

Here I've created a new class called 'EranEntity'; it contains a Hashtable that holds all of my qualities (and I have a lot of them, as you know... :)). With this indexer it becomes much easier to get or update a quality, dictionary-style - the interface is much more readable and easier to use.

Now, let's set some qualities:

public static void Main()
{
   // Create EranEntity instance
   EranEntity entity = new EranEntity();

   entity["smart"] = "very good…";
   entity["nice"] = "very nice…";
}

Today, in the new world of .NET 2.0, we can also implement this generically using the new generics mechanism.
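For example, here is a minimal generically-typed version (a sketch of my own), backed by Dictionary<string, string> instead of Hashtable:

```csharp
using System.Collections.Generic;

// Same indexer idea, but strongly typed via generics (.NET 2.0):
// no casting from object, and wrong value types fail at compile time.
class TypedEntity
{
    private Dictionary<string, string> qualities =
        new Dictionary<string, string>();

    public string this[string key]
    {
        get
        {
            string value;
            // TryGetValue avoids the KeyNotFoundException that the
            // Dictionary's own indexer throws for missing keys.
            return qualities.TryGetValue(key, out value) ? value : null;
        }
        set { qualities[key] = value; }
    }
}
```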

p.s.

Another thing: there is a VS 2005 code-snippet shortcut for this - type the word "indexer" (in the editor, of course...) and press the Tab key, and the whole chunk of code is generated for you to edit.

See you for now...

Wednesday, May 24, 2006 7:54:13 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback
# Monday, May 22, 2006

Hello all!

This is my first blog post; I am thrilled to start writing down my thoughts, and I have a lot of them (I think...).

In my first post I will go into the details of publishing a web site in .NET 2.0. This area is different and renewed compared to .NET 1.1 - there are several new things that someone not yet familiar with .NET 2.0 will be glad to hear about, so let's start...

As known, publishing web application under visual studio 2005 platform is different than publishing the same application under visual studio 2003 platform.

In VS 2003, when we wanted to deploy a site (web application) to the production area, we needed to go to the Project menu and select the Copy Project option; this action gathered all the files necessary for the web site's execution and copied them to a new folder.

In VS 2005 we got some innovations that give us several alternatives for deploying a web site to production. The main action is Publish: right-click the web site project and choose Publish Web Site. When you do this, the application is precompiled and saved into the folder you specify.

Other options you can set while doing the publish action are:

  1. Allow this precompiled site to be updatable - with this checkbox marked, the ASPX files keep their markup intact and remain editable on the server; only the code-behind is precompiled into the /bin folder.
  2. Use fixed naming and single page assemblies - choosing this option (together with the former one) creates a separate, stably-named DLL for each ASPX file.
  3. Enable strong naming on precompiled assemblies - with this option you give a strong name to the generated assembly(ies); this name (key) differentiates them from same-named assemblies precompiled later without one.
  4. Lastly, if you publish the web site with none of the checkboxes selected, the markup is compiled into the DLLs as well: the ASPX files are left as placeholder files with no markup, and compiled metadata files are generated for each ASPX file.


These options map onto the aspnet_compiler tool. This tool supports creating debugging symbols, but that cannot be done from the Publish action, because Publish precompiles in release mode, without symbols. (You can read more about aspnet_compiler here: http://odetocode.com/Articles/417.aspx).
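For reference, the Publish action boils down to command lines like these (the paths and virtual directory name are examples of mine, run from a Windows command prompt):

```shell
rem Precompile the site at C:\src\MySite into C:\deploy\MySite:
aspnet_compiler -v /MySite -p C:\src\MySite C:\deploy\MySite

rem Same, but updatable (-u keeps the .aspx markup editable), with
rem debug symbols (-d), overwriting the target folder (-f):
aspnet_compiler -v /MySite -p C:\src\MySite -u -d -f C:\deploy\MySite
```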

In order to attach symbols to the web application, there is a new tool named Web Deployment Projects (WDP). You can download it from this address: http://msdn.microsoft.com/asp.net/reference/infrastructure/wdp/default.aspx. This tool adds several compilation configurations in release and debug mode.

In a nutshell, the steps for adding symbols with this tool are:

  1. Select the web site project, right-click it, and choose the Add Web Deployment Project option.
  2. Select the name and location of the site to deploy.
  3. A deployment project has now been added to your solution! Select it and, via right-click, open the Property Pages option.
  4. Mark Generate debug information and finish.
  5. Build the whole solution.

After these steps, a new folder with the name you gave will be created, and symbols will be 'attached' to the application. (A real-life example: on the production server you can see the source-code line when a runtime error happens - a good thing, right...?)

Related links are:

http://download.microsoft.com/download/1/5/4/1541980a-d8fc-407b-8c9f-c2df5445b041/Using%20web_deployment_projects_final.doc

Good article about this post: http://odetocode.com/Blogs/scott/archive/2005/11/15/2464.aspx

That's it for now. I will be glad to hear your comments about this post (it is my first one, don't forget).

See you here...

Monday, May 22, 2006 8:37:57 AM (Jerusalem Standard Time, UTC+02:00)  #    Disclaimer  |   |  Trackback