Thursday, 13 August 2009
PerCall - No Session - It is all confusing
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/0739e712-2abe-4b10-8667-3de5c43e4552
Spring.Net - AutoProxy
AutoProxy is an important technique offered by Spring.Net. See the forum discussion below:
http://forum.springframework.net/showthread.php?t=5655
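For context, a minimal sketch of what AutoProxy buys you: instead of declaring a ProxyFactoryObject for every object you want advised, you register an auto-proxy creator once and it proxies every matching object automatically. The object names and the advice type below are hypothetical; ObjectNameAutoProxyCreator is one of the auto-proxy creators Spring.Net ships.

```xml
<objects xmlns="http://www.springframework.net">

  <!-- Hypothetical service object that will get proxied -->
  <object id="orderService" type="MyApp.OrderService, MyApp"/>

  <!-- Hypothetical advice (e.g. an IMethodInterceptor implementation) -->
  <object id="loggingAdvice" type="MyApp.LoggingAdvice, MyApp"/>

  <!-- AutoProxy: wraps every object whose name matches '*Service'
       in an AOP proxy carrying the listed interceptors -->
  <object type="Spring.Aop.Framework.AutoProxy.ObjectNameAutoProxyCreator, Spring.Aop">
    <property name="ObjectNames">
      <list><value>*Service</value></list>
    </property>
    <property name="InterceptorNames">
      <list><value>loggingAdvice</value></list>
    </property>
  </object>

</objects>
```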
AutoMapper
Disclaimer - I have been using AutoMapper for only two weeks, so I might be wrong here.
This tool takes a lot of the pain out of writing mapping code. That said, it is not a mature tool yet and will be painful with even moderately deep hierarchies.
With version 0.3.1 (from CodePlex):
1. You can't map across an abstract relationship - though r116 (the latest as of now) solves this problem. What I mean is: if you map 'Dog' to 'Dog' with a call like .Map(new Source.Dog()), it fails. But if Source.Animal is an interface, it works. Strange.
2. Not perfect with collections (r116).
That said, if your classes have simple or no relationships, this is a great tool.
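For the simple case that works well, a minimal sketch of what the tool saves you (the Source/Dest types here are hypothetical; the static Mapper API shown is the 0.x-era one discussed above):

```csharp
using AutoMapper;

namespace Source { public class Order { public string Customer { get; set; } public decimal Total { get; set; } } }
namespace Dest   { public class OrderDto { public string Customer { get; set; } public decimal Total { get; set; } } }

class Program
{
    static void Main()
    {
        // One-time registration of the mapping (0.x static API)
        Mapper.CreateMap<Source.Order, Dest.OrderDto>();

        // Properties with matching names are copied automatically -
        // no hand-written "dto.Customer = order.Customer" boilerplate
        Dest.OrderDto dto = Mapper.Map<Source.Order, Dest.OrderDto>(
            new Source.Order { Customer = "ACME", Total = 42m });
    }
}
```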
Load n Performance - Final Post
Ahh, I never thought I would write this post so soon. Our Load and Performance plan got messed up after 2 months, but it was a good learning experience for me.
The bottom line: don't always think outside the box. What I mean is, one of the ideas we considered was server affinity with in-proc caching. It obviously has side effects, but it offers great performance (and is probably a nightmare for the Operations team). Somewhere, someone told us not to use server affinity; I believe they are no more than authors.
Technology - Politics
Unusual post from me -
Being sound in technology is important, but it will not always be enough to sell your ideas. So what else is important?
1. Always support your team - as they say, 'Never abandon your wingmen'.
2. Establish good relationships with the so-called 'experts', but never take them at their word. My company follows a common but strange team composition: out of 100%, 30% are contractors. Some of them are good, but most of them want to delay the project for more money (daily wages). These contractors call themselves experts (apologies if I offended anyone here).
3. Don't be afraid to annoy people, or to have a 'kick ass' policy.
I don't know what else to say here, but these are the things I learned.
Monday, 22 June 2009
Load and Performance - Post 2
Learnings so far
1. Don't always look for the big fish - our BUILD team had deployed a DEBUG build into the test environment :).
more to follow
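One cheap safeguard against that mishap: have a deployment sanity check ask each assembly whether it was compiled with debug settings. A sketch (checking DebuggableAttribute is a common heuristic for spotting a Debug build):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class BuildCheck
{
    static void Main()
    {
        // A Debug build carries DebuggableAttribute with JIT optimizations
        // disabled; a Release build either omits the attribute or keeps them on.
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));

        bool isDebug = attr != null && attr.IsJITOptimizerDisabled;
        Console.WriteLine(isDebug ? "DEBUG build" : "RELEASE build");
    }
}
```

Run this (or an equivalent check) against the deployed binaries before a performance test run, so a Debug build never skews the numbers again.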
Load & Performance - Post 1
We are having a strange (as yet unexplained) problem in our performance test environment.
Few facts
1. First time we have opted for virtual machines over physical.
2. There are 6 web servers in the DMZ, 3 app servers (all WCF services, with a firewall between Web and App) and 6 DB servers (again, a firewall between App and DB).
3. As usual, a VIP sits in front of the app and DB servers for load balancing.
When we ran tests, everything seemed to go haywire (with response times over a minute). So we set out to narrow down the problem and performed the steps below:
1. One-on-one testing (1 Web, 1 App and 1 DB), but all in the virtual environment. This didn't yield much improvement.
2. 1 Web (VM) against 1 App (physical): this got us near our benchmarks.
3. 1 Web (physical) against 1 App (VM): bad results again.
and many more combinations like these.
So we concluded:
1. The app tier on VMs is the problem, as the physical server yielded good performance.
2. It cannot be the code base; after all, the same code performs well on the physical server.
We looked at the event logs (web servers and the PIX (Cisco firewall)) and found the following errors occurring most frequently (the first 3 are WCF-related):
1. EndpointNotFound exception
2. Timeout exception
3. Protocol exception
4. TCP 10048 error(from PIX).
The above errors are thrown seemingly at random, depending on the configuration we are testing against. That is:
with the VIP - we get only the EndpointNotFound exception
without the VIP - all the others
Before I give my suggestions, a disclaimer: this is my first project in L&P, and these fixes are yet to be applied.
1. EndpointNotFound exception
Simply due to WCF's connection caching plus unplanned testing.
First, .NET pools previously used TCP connections.
Second, testers simply disable certain app servers and carry on testing, assuming everything will keep working out of the box.
So what happens when a pooled connection points to a disabled app server?
2. Timeout exception
Nothing much to discuss here; it was due to the default value (10) of 'maxConcurrentSessions'. We increased it to 1000 (though this didn't solve our problem).
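For reference, that throttle lives in a WCF service behavior; a config sketch (the numbers here are illustrative, not a recommendation - size them for your own load):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="ThrottledBehavior">
      <!-- WCF 3.x defaults are small: 16 concurrent calls, 10 sessions -->
      <serviceThrottling maxConcurrentCalls="200"
                         maxConcurrentSessions="1000"
                         maxConcurrentInstances="200" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```

Reference the behavior from the `<service>` element via `behaviorConfiguration="ThrottledBehavior"`.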
3. Protocol exception
No Clue
4. TCP 10048 error(from PIX).
TCP port exhaustion: all of the roughly 4000 ephemeral ports are getting used. You can find plenty of articles about this.
We planned to reduce the TIME_WAIT period of TCP connections from the default of 4 minutes to 2 seconds. It is a quick win.
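On Windows, TIME_WAIT and the ephemeral port range are controlled by the registry values below. The values shown are illustrative; note that per Microsoft's documentation TcpTimedWaitDelay accepts 30-300 seconds, so very low values may effectively be clamped.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Seconds a closed connection lingers in TIME_WAIT
; (default 240 = 4 minutes; documented range 30-300)
"TcpTimedWaitDelay"=dword:0000001e
; Raise the upper bound of the ephemeral port range (default 5000)
"MaxUserPort"=dword:0000fffe
```

A reboot is required for these to take effect. Raising MaxUserPort attacks the same symptom from the other side: more ports available before exhaustion.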
One thing I don't understand is: why does port exhaustion occur only on the VMs?
I will update this post once I have a definitive answer.
Thursday, 19 February 2009
How enterprise library caches the SP Signatures
I would like to share a few facts about how the Enterprise Library Data Access block caches stored procedure (SP) signatures.
1. Every class of type 'Database' is related to a class called 'ParameterCache' (the relationship is composition), and this class is responsible for caching the SP signatures. 'SqlDatabase' derives from 'Database'. We don't need any configuration to enable this; it happens automatically.
2. 'ParameterCache' internally uses a synchronized Hashtable to store the SP signatures, with 'ConnectionString:SPName' as the key.
3. The 'Database' class instantiates 'ParameterCache' as a static member, which means its scope is limited to the AppDomain.
public abstract class Database : IInstrumentationEventProvider
{
static readonly ParameterCache parameterCache = new ParameterCache();
No matter how many SQLDatabase objects we create, only one ParameterCache will exist.
4. Before we execute any SP, we call 'GetStoredProcCommand'; this function retrieves the SP signature from the cache if it is available there, otherwise from the DB, and then stores it in the cache.
public virtual DbCommand GetStoredProcCommand(string storedProcedureName,
params object[] parameterValues)
{
    // ... (builds the DbCommand, then populates its parameters from the cache)
    parameterCache.SetParameters(command, this);
And this is how I verified it:
1. I ran SQL Profiler; the first time the signature-retrieving code ran (in this case DataAccessHelper.AssignParameters(), which internally calls GetStoredProcCommand), I could see an entry for '[sys].[sp_procedure_params_managed]'. On the next hit I could not, which means the cache was used.
2. To make sure the cache was being populated, I checked the hash table count before and after execution: it was 0 and 1 respectively.
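The behaviour described above can be sketched as follows (the connection string and SP name are hypothetical):

```csharp
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Data.Sql;

class ParameterCacheDemo
{
    static void Main()
    {
        Database db = new SqlDatabase(
            "server=.;database=Shop;Integrated Security=SSPI");

        // First call: EntLib queries [sys].[sp_procedure_params_managed] on the
        // DB to discover the parameters, then caches them keyed by
        // "ConnectionString:SPName" in the static ParameterCache.
        DbCommand cmd1 = db.GetStoredProcCommand("GetOrders", 42);

        // Second call: no round trip - the parameters come from the cache,
        // which is shared by every Database instance in the AppDomain.
        DbCommand cmd2 = db.GetStoredProcCommand("GetOrders", 42);
    }
}
```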
Enterprise Library Vs Log4Net
I tried to evaluate log4net 1.2.10 and the Enterprise Library Logging Application Block 4.0 from a development perspective, with the following things in mind:
1. I should be able to log application state in a simple and consistent way.
2. Logging shouldn’t be an overhead for me.
3. I should be able to log anything anywhere.
4. Logging shouldn’t be an overhead for application.
And what I didn't consider:
1. Extensibility, i.e. writing custom loggers or extending the available ones. The main reason for using these application blocks is to reduce development/testing time; if I am going to extend them, I might as well write my own logging application block.
2. Support for output mediums other than DB, file and event log.
Let's take EntLib first.
I started reading the Microsoft documentation and, to be frank, within a few minutes I was exhausted by the amount of detail I needed to absorb just to log something.
I created an application that consumes the Logging block. Luckily a graphical configuration tool is provided, which took some of the pain away, but that in itself tells you how cumbersome configuration is in the Enterprise Library blocks.
OK, the code for logging looks something like this:
LogEntry logEntry = new LogEntry();
logEntry.EventId = 100;
logEntry.Priority = 2;
logEntry.Message = "Informational message";
logEntry.Categories.Add("Trace");
logEntry.Categories.Add("UI Events"); //This is for filtering the messages.
Logger.Write(logEntry);
It is not mandatory to pass EventId or Priority, but they can be useful for giving the log entries some context.
And the Enterprise Library blocks integrate very well with each other.
Then we come to log4net.
Logging is really simple, in terms of understanding both the concept and the implementation. The configuration is still somewhat messy, but it is better than the Logging Application Block's because I can actually understand it.
And the code is really simple:
private static readonly ILog Logger = LogManager.GetLogger(typeof(MyClass)); // obtain a logger once (MyClass being your consuming class)
Logger.Info("message"); // overloads are available to pass an object, exception details, etc.
Apart from this, whatever we can do in the Logging Application Block can be done in log4net as well. But there is more:
log4net ships with an appender called 'BufferingForwardingAppender', which lets us buffer log entries up to a configured count before they are flushed out.
Microsoft has something similar (I suspect they copied it), but there we can only turn 'AutoFlush' on or off, and the flushing itself has to be done manually in code.
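A config sketch of that buffering setup (the appender names, file name and pattern are illustrative):

```xml
<log4net>
  <!-- Holds up to 100 entries in memory, then forwards them in one go -->
  <appender name="Buffer" type="log4net.Appender.BufferingForwardingAppender">
    <bufferSize value="100" />
    <appender-ref ref="RollingFile" />
  </appender>

  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="app.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>

  <root>
    <level value="INFO" />
    <appender-ref ref="Buffer" />
  </root>
</log4net>
```

The trade-off to keep in mind: buffered entries can be lost if the process dies before a flush.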
Object management is handled well by the log4net framework, which means we don't need to pass a logger object across components.
Several websites point out that log4net is faster than the Logging Application Block. Those results depend on various factors, but ultimately we want our application to run as fast as possible.
Finally, the Logging Application Block 4.0 comes with WCF support and other extras, but if it can't do what it is intended to do in a simple and effective way, who cares about the extras?
The above are my own observations; I didn't draw these conclusions from other websites. Even though I strongly support Microsoft, I am against them in this particular case.