Showing posts with label C#. Show all posts

Thursday, May 19, 2011

Role-based permissions vs. Enterprise permissions

The problem


My application has multiple features. For example:
1) Viewing a resume.
2) Sending a message to the resume poster.
3) Viewing contact information on a resume.
My application has multiple users. For example:
1) Anonymous
2) Fred Lurker
3) Paul Generous
I want to allow some users to have access to one set of features and other users to have access to another set of features.
For example I want:
1) Anonymous users to be able to view resumes.
2) Fred Lurker to be able to view resumes and send messages to resume posters.
3) Paul Generous to be able to view resumes, send messages, and see contact information on resumes.

Direct Permissions Mapping


The most direct approach would be to make every feature check which user is trying to access it. For example:
if (user is “Fred Lurker” or user is “Paul Generous”)
{
    Allow the user to send a message.
}
if (user is “Paul Generous”)
{
    Allow the user to view resume contact information.
}
That’s very direct and simple, but it does not scale at all. My application has thousands of users; new users are added to the system, old users are deleted, and existing users gain or lose access to features over time. To deal with all that, the code would have to change constantly, which is not feasible.

Role-based Permissions


A better approach is to introduce roles. For example:
1) A “Recruiters” role that has access to the “Sending message to resume poster” feature.
2) A “PayingUsers” role that has access to the “Viewing contact information on a resume” feature.
Then:
1) Add the “Fred Lurker” user to the “Recruiters” role.
2) Add the “Paul Generous” user to the “PayingUsers” role.
With this approach, software developers define which roles have access to which features. They define it in code (e.g. in C#, C++, or Java).
The application administrator defines which roles users belong to.
When Paul Generous pays his membership fee, the administrator simply adds Paul to the “PayingUsers” role. The administrator does not need to explicitly define which features Paul has access to, because that is already defined by developers in application code for the “PayingUsers” role.
To summarize:
1) Roles introduce one intermediate step in mapping users to features.
2) Developers map roles to features.
3) Administrators map users to roles.
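
A role-based check of this kind can be sketched with .NET's built-in IPrincipal interface. This is a minimal sketch, assuming the hypothetical role and user names from this post:

```csharp
using System;
using System.Security.Principal;

class RoleDemo
{
    // Developers map roles to features in code.
    static void ShowFeatures(IPrincipal user)
    {
        if (user.IsInRole("Recruiters"))
            Console.WriteLine("Can send messages to resume posters.");
        if (user.IsInRole("PayingUsers"))
            Console.WriteLine("Can view contact information on resumes.");
    }

    static void Main()
    {
        // Administrators map users to roles (in-memory here, for the demo;
        // in ASP.NET this would typically come from a role provider).
        var fred = new GenericPrincipal(new GenericIdentity("Fred Lurker"),
                                        new[] { "Recruiters" });
        ShowFeatures(fred); // prints only the messaging line
    }
}
```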

Enterprise Permissions


What if I want my application to be more flexible and allow the administrator to map groups of users to features without asking developers to modify code?
That could be set up like this:
1) My enterprise system would still have Users.
2) I’d add Groups, so the administrator would be able to add users to groups.
3) Developers would add Privileges, so the administrator would be able to map groups to privileges.
4) Developers would code features in such a way that one feature maps to one privilege in code:
if (user.HasPrivilege("ViewResume"))
{
    ShowResume();
}
5) The administrator would create and delete Groups and map these groups to privileges.
An Enterprise Permissions system gives lots of flexibility to application administrators. It’s very appealing for IT department management to be able to tweak users’ permissions without waiting for developers to release a new version of the app. That’s why such enterprise permissions systems are so popular.
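
The Users-Groups-Privileges chain behind a HasPrivilege check can be sketched like this. All names here are hypothetical, for illustration only; a real system would store these mappings in a database:

```csharp
using System;
using System.Collections.Generic;

public class EnterprisePermissions
{
    // Administrator maps users to groups.
    static readonly Dictionary<string, string[]> UserGroups =
        new Dictionary<string, string[]>
        {
            { "Paul Generous", new[] { "PaidMembers" } },
        };

    // Administrator maps groups to privileges.
    static readonly Dictionary<string, string[]> GroupPrivileges =
        new Dictionary<string, string[]>
        {
            { "PaidMembers", new[] { "ViewResume", "ViewContactInfo" } },
        };

    // A user has a privilege if any of the user's groups grants it.
    public static bool HasPrivilege(string user, string privilege)
    {
        string[] groups;
        if (!UserGroups.TryGetValue(user, out groups)) return false;
        foreach (string g in groups)
        {
            string[] privs;
            if (GroupPrivileges.TryGetValue(g, out privs)
                && Array.IndexOf(privs, privilege) >= 0)
                return true;
        }
        return false;
    }

    static void Main()
    {
        Console.WriteLine(HasPrivilege("Paul Generous", "ViewContactInfo")); // True
        Console.WriteLine(HasPrivilege("Anonymous", "ViewContactInfo"));     // False
    }
}
```

Note how permissions can reach a user through multiple groups; that indirection is exactly what makes revoking access hard, as discussed below.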

Disadvantages of Enterprise Permissions


Unfortunately, in real life a flexible enterprise permissions system causes nothing but pain.
Here’s why:
1) With flexibility comes added complexity.
Having a “Users-Groups-Privileges-Features” chain instead of the shorter “Users-Roles-Features” chain significantly increases the number of possible combinations in which permissions can be granted to end users.
That means a permission can be granted to a user in several different ways through multiple groups. So it’s hard to revoke a user’s permissions when necessary, simply because it’s hard to figure out what permissions the user really has.
2) It’s very hard for administrators to grasp which groups should map to which privileges.
The administrator focuses on the end user. The administrator knows which groups a user should belong to, but has only a vague idea of what each privilege allows a user to do in the application.
End result: the developer ends up setting up the permissions anyway.
3) Enterprise permissions are much harder for developers.
Developers know which features should be available to which role (see Role-based Permissions above). Developers can map features to roles in their code.
Can developers map features to privileges and then map privileges to groups?
Yes, they can. But it’s harder and it’s more work. It requires both coding in C#/Java and scripting in SQL. Or, even worse, mapping groups to privileges in the UI (an error-prone deployment nightmare).
Administrators can set up new groups that developers do not know about. Administrators can delete groups that developers originally created. All that can quickly bring the system to its knees. That’s why in the end administrators are afraid to create and delete groups, which defeats the original purpose of the enterprise system: giving more flexibility to application administrators.
4) It’s hard to trace changes in Enterprise Permissions.
In a role-based system, the roles-to-features mapping is coded in C#/Java and stored under source control. Code history helps to find out when and why a given role was mapped to a given feature.
Not so with the mapping between groups and privileges. That mapping is stored in the database and is wiped out without a trace every time an administrator changes it.
There are no comments on why the groups-to-privileges mapping was done the way it was done. There is simply no place to put such a comment (unlike the roles-to-features mapping, which can be commented in C#/Java code).

Conclusion


The most robust way of handling permissions in most applications is role-based permissions (Users-Roles-Features).
In spite of the “flexibility” appeal of enterprise permissions (Users-Groups-Privileges-Features), such a system has serious disadvantages and virtually no real advantages.

Thursday, September 23, 2010

How to set executionTimeout for individual requests?

You probably know that you can change http request processing timeout for specific page like this:
<location path="MyLongRunningHttpHandler.ashx">
  <system.web>
    <httpRuntime executionTimeout="600" />
  </system.web>
</location>
But what if you want to set it for a control or just a function, and don't have a predefined list of pages to specify in web.config?
Or maybe you don't want to pollute web.config with entries like that?

There should be some way to do it in C# code, right?
Right.
Here's how you do it:
    HttpContext.Current.Server.ScriptTimeout = 600; // 10 minutes
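
In an .ashx handler that could look like this. A minimal sketch, assuming a hypothetical handler name; it needs the ASP.NET pipeline to run, so it is not testable standalone:

```csharp
using System.Web;

// Hypothetical long-running handler that raises its own timeout
// instead of a web.config <location> entry.
public class MyLongRunningHttpHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Overrides executionTimeout for this request only.
        context.Server.ScriptTimeout = 600; // 10 minutes

        // ... long-running work goes here ...
        context.Response.Write("done");
    }
}
```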
If that's what you were looking for, you probably want to test it.
I tried to test it too, and it turned out to be tricky.

First I set web.config's timeout to 2 seconds:
<httpRuntime executionTimeout="2" />

Then I put 10 seconds delay to my ashx handler's code-behind:
System.Threading.Thread.Sleep(10000); // 10 seconds

Then I commented this line:
// HttpContext.Current.Server.ScriptTimeout = 600; // 10 minutes

and opened my ashx handler's url in browser.

I expected it to crash with timeout error... but it did not happen.
:-O
A few experiments showed that executionTimeout works only if all of the following are true:
1) Domain name is not localhost (to test timeout you should use "YourComputerName" instead of "localhost").
2) Project is compiled in Release mode.
3) <compilation debug="false">
If any of the above is not true then executionTimeout length is virtually unlimited.
On top of that, IIS typically times out later than the executionTimeout limit asks it to.
When I set executionTimeout="2" and made my page request sleep for 10 seconds, I got a "Request timed out." response in only ~40% of requests.

Monday, September 13, 2010

Sneaky MaxItemsInObjectGraph attribute in WCF

I spent almost two days trying to figure out what caused a WCF service to crash (in a weird way) when it tried to return a large resultset.
Initially the problem expressed itself on the WCF client side. When the number of records in the returned results was close to 5000, the WCF client generated a
meaningless "An existing connection was forcibly closed by the remote host." exception.
A Google search for '"An existing connection was forcibly closed by the remote host." WCF size' brought up the
"WCF issues sending large data" forum discussion.
The right answer (maxItemsInObjectGraph) was mentioned there, but it was buried under a pile of misleading suggestions.

One step toward the solution was using the soapUI utility to make the requests (instead of calling the WCF service from another .NET client).
That helped determine that the problem was on the WCF server side: soapUI simply couldn't get any response (when the returned dataset had ~5000+ rows).

What really helped to find the final answer was enabling WCF diagnostics by adding this to web.config on the server side:
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel" switchValue="Warning, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add type="System.Diagnostics.DefaultTraceListener" name="Default">
          <filter type="" />
        </add>
        <add name="ServiceModelTraceListener">
          <filter type="" />
        </add>
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add initializeData="app_tracelog.svclog"
         type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
         name="ServiceModelTraceListener" traceOutputOptions="Timestamp">
      <filter type="" />
    </add>
  </sharedListeners>
</system.diagnostics>
Then app_tracelog.svclog revealed a much more specific error message:
---
Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota.
---
A quick googling for "maxItemsInObjectGraph" brought me to the "MaxItemsInObjectGraph and keeping references when serializing in WCF" blog post, which recommended adding the following section to the WCF server web.config:
<behaviors>
  <serviceBehaviors>
    <behavior name="LargeServiceBehavior">
      <dataContractSerializer maxItemsInObjectGraph="100000"/>
    </behavior>
  </serviceBehaviors>
</behaviors>
and this section to WCF client web.config:
<behaviors>
  <endpointBehaviors>
    <behavior name="LargeEndpointBehavior">
      <dataContractSerializer maxItemsInObjectGraph="100000"/>
    </behavior>
  </endpointBehaviors>
</behaviors>
That worked.
I used VS.NET 2008 / .NET Framework 3.5, but I think this is applicable to .NET 4.0 too.
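
For completeness: WCF also exposes this quota in code, via the MaxItemsInObjectGraph property on ServiceBehaviorAttribute. A configuration sketch only; the service and contract names are hypothetical:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IResumeService
{
    [OperationContract]
    string[] GetResumes();
}

// Raises the serializer quota for this service without touching web.config.
[ServiceBehavior(MaxItemsInObjectGraph = 100000)]
public class ResumeService : IResumeService
{
    public string[] GetResumes()
    {
        // ... return a large resultset here ...
        return new string[0];
    }
}
```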

Enjoy.

Thursday, June 03, 2010

How to preserve data during processing HttpRequest in ASP.NET?

The answer is HttpContext.Current.Items.

I'm planning to use HttpContext.Current.Items for preserving pointers to the controls that will render the final JavaScript on the pages.
Or maybe preserve the JavaScript itself and then make these controls retrieve it from HttpContext.Current.Items["BottomScriptBuilder"] and HttpContext.Current.Items["HeadScriptBuilder"]?


System.Web.HttpContext.Current.Items is actually a pretty old feature of ASP.NET.
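
HttpContext.Current.Items is a per-request dictionary: anything stored early in the request (e.g. in a module) can be read later (e.g. in a control), and it is discarded when the request ends. A minimal sketch of the idea from the post (the key name "HeadScriptBuilder" is the one mentioned above; needs the ASP.NET pipeline to run):

```csharp
using System.Text;
using System.Web;

public static class ScriptCollector
{
    // Lazily creates one StringBuilder per HTTP request.
    public static StringBuilder HeadScriptBuilder
    {
        get
        {
            var items = HttpContext.Current.Items;
            if (items["HeadScriptBuilder"] == null)
                items["HeadScriptBuilder"] = new StringBuilder();
            return (StringBuilder)items["HeadScriptBuilder"];
        }
    }
}

// Any control on the page can then append to the same builder:
//   ScriptCollector.HeadScriptBuilder.Append("alert('hi');");
```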

Sunday, May 16, 2010

How to include JavaScript tracking code into head control in asp.net

The Google Analytics team recommends including the asynchronous Google Analytics tracking JavaScript code right before the closing </head> tag.
So, how do you include that script into every page of your web site without modifying every page?

I considered multiple solutions:

1) Cut&paste into every page (the worst).

2) Create a HeadScriptControl.cs server control and cut&paste it into every page (slightly better, but still requires lots of cut&paste).
Here's an example of HeadScriptControl code:
using System;
using System.Web.UI;
using System.Text;

public class HeadScriptControl : Control
{
    private const string GoogleAnalyticsFirstPartScript = @"
<script type=""text/javascript"">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-2']);
_gaq.push(['_trackPageview']);
</script>
";
    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write(GoogleAnalyticsFirstPartScript);
    }
}

3) Create PageWithHeadScript.cs, inherit it from the Page class, inherit every page on your web site from PageWithHeadScript, and render HeadScriptControl in PageWithHeadScript.cs like this:
public class PageWithHeadScript : Page
{
    protected override void Render(HtmlTextWriter writer)
    {
        this.Header.Controls.Add(new HeadScriptControl());
        base.Render(writer);
    }
}
That approach requires even less cut&paste, but still every page on your web site needs to be touched.

4) Using an HttpModule.
I think that's the best approach, because it's not necessary to modify the pages at all.
Here’s how I do it.

4.1. Create HttpModule:
public sealed class MyHttpModule : IHttpModule
{
    void IHttpModule.Init(HttpApplication application)
    {
        application.PreRequestHandlerExecute += new EventHandler(application_PreRequestHandlerExecute);
    }

    void application_PreRequestHandlerExecute(object sender, EventArgs e)
    {
        RegisterPagePreRenderEventHandler();
    }

    private void RegisterPagePreRenderEventHandler()
    {
        if (HttpContext.Current.Handler.GetType().ToString().EndsWith("_aspx"))
        {
            // Register the PreRender handler only on .aspx pages.
            Page page = (Page)HttpContext.Current.Handler;
            page.PreRender += new EventHandler(page_PreRender);
        }
    }

    void page_PreRender(object sender, EventArgs e)
    {
        Page page = (Page)sender;
        page.Header.Controls.Add(new HeadScriptControl());
    }

    void IHttpModule.Dispose()
    {
    }
}

4.2. Register HttpModule in web.config:
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <modules>
      <add name="MyHttpModule" type="MyHttpModule"/>
    </modules>
  </system.webServer>
</configuration>

Note that it's impossible to add controls to the page directly in the PreRequestHandlerExecute event handler.
That's why I subscribe to the Page.PreRender event.

Please let me know what you think.

Sunday, April 11, 2010

Cast from float to double error

I was surprised to find out that this line of code fails:

Assert.AreEqual(0.1d, (double)0.1f);

(double)0.1f is actually the same as ... drumroll ... 0.100000001490116

Why does it happen?
Because float and double do NOT represent fractions precisely.
Rounding is always going on.

How to deal with this?
There are several options:
1) Round the double after converting the float number into it.
For example:

float f = 0.1f;
double d = Math.Round(f, 8);
would assign 0.1d to d.

2) Don't use float/Single; use double only.

double d = 0.1;
would assign exactly 0.1d to d.

3) Use decimal or bigdecimal if you are dealing with money.

decimal m = 0.1m;
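
The behaviors above can be checked with a few quick assertions. A small sketch using plain Debug.Assert:

```csharp
using System;
using System.Diagnostics;

class FloatRoundingDemo
{
    static void Main()
    {
        // Converting 0.1f to double drags the float rounding error along.
        Debug.Assert((double)0.1f != 0.1d);

        // Rounding after the conversion recovers the intended value.
        float f = 0.1f;
        double d = Math.Round(f, 8);
        Debug.Assert(d == 0.1d);

        // decimal stores 0.1 exactly.
        decimal m = 0.1m;
        Debug.Assert(m == 0.1m);

        Console.WriteLine("all checks passed");
    }
}
```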

Hope it helps.

Saturday, September 26, 2009

Unable to obtain public key for StrongNameKeyPair

You are using Visual Studio 2008 (SP1 or not, it doesn't matter).
You have a test project that is signed with a strong name key, and that key has a password on it.
You are trying to create a unit test for a private method and are trying to create an accessor.
You get an error like this:
"Creation of the private accessor for Xyz failed"

You get that error because you excluded the "Test References" folder and the YourProject.accessor file from your test project.

The reason you excluded the YourProject.accessor file from your project is that you were getting a compilation error:
"Unable to obtain public key for StrongNameKeyPair".

The reason you are getting that error is that VS 2008 SP1, and even VS 2010 Beta 1, have a bug.

The workaround for that bug is to turn off strong name signing on your test project.

Read more here:
Unable to obtain public key for StrongNameKeyPair

Friday, May 29, 2009

SQL_Latin1_General_Cp850_BIN

While working on the Auto-Moderator for PostJobFree.com I encountered the following problem: my C# code considered ‘3’ and ‘3’ to be different words, but SQL Server considered them the same.
That expressed itself in the following error:
Cannot insert duplicate key row in object 'dbo.Word' with unique index 'IX_Word'.

That was a little bit surprising, considering that I defined Word as a Unicode column (nvarchar).

While searching for the solution, my first thought was to make the 'IX_Word' index non-unique. That worked, but it would have introduced other problems with the spam filtering business logic down the road.

The real solution needed to make SQL Server compare strings exactly the same way C# code does.
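
For context: C#'s string equality is an ordinal (code-point by code-point) comparison, which is what a binary collation reproduces on the SQL Server side. A sketch, using a hypothetical character pair for illustration (the fullwidth digit '３', U+FF13, vs. ASCII '3'; the exact pair from my case may differ, but the principle is the same):

```csharp
using System;
using System.Diagnostics;

class OrdinalComparisonDemo
{
    static void Main()
    {
        string fullwidth = "\uFF13"; // fullwidth digit three
        string ascii = "3";

        // C# '==' on strings is an ordinal comparison, so these differ.
        Debug.Assert(fullwidth != ascii);

        // Linguistic (culture-sensitive) comparisons, like SQL Server's
        // default non-binary collations, may treat such pairs as equal.
        Console.WriteLine(string.Equals(fullwidth, ascii,
            StringComparison.Ordinal)); // False
    }
}
```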

I started looking into SQL Server collations and finally found the solution: use the SQL_Latin1_General_Cp850_BIN collation.

Basically, the solution is to declare the 'Word' column with the SQL_Latin1_General_Cp850_BIN collation:
Create Table Word(
WordId bigint identity(1,1) not null,
Word nvarchar(50) COLLATE SQL_Latin1_General_Cp850_BIN not null,
JobPostCount int not null DEFAULT 0,
JobLogSpamCount int not null DEFAULT 0,
CreateDate datetime not null,
UpdateDate datetime not null,
CONSTRAINT PK_Word Primary Key Clustered
(
WordId ASC
)
)
GO

Create Unique Index IX_Word ON Word
(
Word
)
GO

A possible drawback of the solution: using the SQL_Latin1_General_Cp850_BIN collation may cause weird sorting in SQL queries, but the sorting collation can easily be redefined like this:
select * from Word
order by Word COLLATE SQL_Latin1_General_CP1_CI_AS


Moreover, the sorting provided by the binary collation (SQL_Latin1_General_Cp850_BIN) looks quite reasonable.

You may also use SQL_Latin1_General_Cp850_BIN2 collation for better sorting.


Here’s a SQL sample for you to play with:
--drop table t;
select N'3' as Word
into t;

insert into t
select '3' as Word;

select * from t
where Word = N'3';

select * from t
where Word = N'3'

select * from t
where Word = N'3' collate SQL_Latin1_General_Cp850_BIN;

select * from t
where Word = N'3' collate SQL_Latin1_General_Cp850_BIN

Enjoy!

Tuesday, March 03, 2009

Serialize and Deserialize objects in .NET

I'm not sure why the XML standard doesn't allow certain characters to be encoded into XML... and it causes problems.

This C# code:

XmlSerializer xs = new XmlSerializer(typeof(T));
using (MemoryStream memoryStream = new MemoryStream(StringToUTF8ByteArray(objString)))
{
    obj = xs.Deserialize(memoryStream);
}

crashes with exception:
System.InvalidOperationException: There is an error in XML document (1, 50). ---> System.Xml.XmlException: ' ', hexadecimal value 0x0C, is an invalid character. Line 1, position 50.

Here's the fix and a fully working version (note that an XmlTextReader is used between the MemoryStream and the XmlSerializer):


[TestMethod()]
public void SerializeDeserializeObjectTest()
{
    SerializeDeserializeObjectTest("test");
    SerializeDeserializeObjectTest("\f");
}

private void SerializeDeserializeObjectTest(string input)
{
    string serialized = Serializer.SerializeObject(input);
    string deserialized = Serializer.DeserializeObject<string>(serialized);
    Assert.AreEqual(input, deserialized, input);
}


public static class Serializer
{
    public static string SerializeObject(Object obj)
    {
        MemoryStream memoryStream = new MemoryStream();
        XmlSerializer xs = new XmlSerializer(obj.GetType());
        XmlTextWriter xmlTextWriter = new XmlTextWriter(memoryStream, Encoding.UTF8);
        xs.Serialize(xmlTextWriter, obj);
        memoryStream = (MemoryStream)xmlTextWriter.BaseStream;
        return UTF8ByteArrayToString(memoryStream.ToArray());
    }

    public static T DeserializeObject<T>(string objString)
    {
        Object obj = null;
        XmlSerializer xs = new XmlSerializer(typeof(T));
        using (MemoryStream memoryStream = new MemoryStream(StringToUTF8ByteArray(objString)))
        {
            // XmlTextReader is more lenient about the characters it accepts
            // than deserializing straight from the stream.
            XmlTextReader xtr = new XmlTextReader(memoryStream);
            obj = xs.Deserialize(xtr);
        }
        return (T)obj;
    }

    private static string UTF8ByteArrayToString(byte[] characters)
    {
        UTF8Encoding encoding = new UTF8Encoding();
        return encoding.GetString(characters);
    }

    private static byte[] StringToUTF8ByteArray(string xmlString)
    {
        UTF8Encoding encoding = new UTF8Encoding();
        return encoding.GetBytes(xmlString);
    }
}



Thanks to Tom Goff for XML Serialization Sorrows article.

Thanks to Andrew Gunn for XML Serialization in C# article.

Wednesday, March 26, 2008

How to encrypt and decrypt string in .NET application

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

namespace MyCompany.Library
{
    public class Encryption
    {
        public static string EncryptString(string valueToEncrypt)
        {
            return EncryptString(valueToEncrypt, "My password key");
        }

        public static string DecryptString(string valueToDecrypt)
        {
            return DecryptString(valueToDecrypt, "My password key");
        }

        public static string EncryptString(string valueToEncrypt, string secretPhrase)
        {
            CryptoStream encryptStream = null;           // Stream used to encrypt
            RijndaelManaged rijndael = null;             // Rijndael provider
            ICryptoTransform rijndaelEncrypt = null;     // Encrypting object
            MemoryStream memStream = new MemoryStream(); // Stream to contain data
            byte[] key;
            byte[] IV;
            GenerateKey(secretPhrase, out key, out IV);
            try
            {
                if (valueToEncrypt.Length > 0)
                {
                    // Create the crypto objects
                    rijndael = new RijndaelManaged();
                    rijndael.Key = key;
                    rijndael.IV = IV;
                    rijndaelEncrypt = rijndael.CreateEncryptor();
                    encryptStream = new CryptoStream(
                        memStream, rijndaelEncrypt, CryptoStreamMode.Write);

                    // Write the encrypted value into memory
                    byte[] input = Encoding.UTF8.GetBytes(valueToEncrypt);
                    encryptStream.Write(input, 0, input.Length);
                    encryptStream.FlushFinalBlock();

                    // Retrieve the encrypted value and return it
                    return Convert.ToBase64String(memStream.ToArray());
                }
                else
                {
                    return "";
                }
            }
            finally
            {
                if (rijndael != null) rijndael.Clear();
                if (rijndaelEncrypt != null) rijndaelEncrypt.Dispose();
                if (memStream != null) memStream.Close();
            }
        }

        public static string DecryptString(string valueToDecrypt, string secretPhrase)
        {
            CryptoStream decryptStream = null;       // Stream used to decrypt
            RijndaelManaged rijndael = null;         // Rijndael provider
            ICryptoTransform rijndaelDecrypt = null; // Decrypting object
            MemoryStream memStream = null;           // Stream to contain data
            byte[] key;
            byte[] IV;
            GenerateKey(secretPhrase, out key, out IV);
            try
            {
                if (valueToDecrypt.Length > 0)
                {
                    // Create the crypto objects
                    rijndael = new RijndaelManaged();
                    rijndael.Key = key;
                    rijndael.IV = IV;

                    // Now decrypt the previously encrypted message using the
                    // decryptor obtained in the above step.

                    // Write the encrypted value into memory
                    byte[] encrypted = Convert.FromBase64String(valueToDecrypt);
                    memStream = new MemoryStream(encrypted);

                    rijndaelDecrypt = rijndael.CreateDecryptor();
                    decryptStream = new CryptoStream(memStream, rijndaelDecrypt, CryptoStreamMode.Read);

                    byte[] fromEncrypt = new byte[encrypted.Length];

                    // Read the data out of the crypto stream.
                    // CryptoStream.Read may return fewer bytes than requested,
                    // so keep reading until the stream is drained.
                    int totalRead = 0;
                    int bytesRead;
                    while ((bytesRead = decryptStream.Read(
                        fromEncrypt, totalRead, fromEncrypt.Length - totalRead)) > 0)
                    {
                        totalRead += bytesRead;
                    }

                    // Retrieve the decrypted value and return it
                    string decryptedString = Encoding.UTF8.GetString(fromEncrypt, 0, totalRead);
                    return decryptedString.TrimEnd(new char[] { '\0' });
                }
                else
                {
                    return "";
                }
            }
            finally
            {
                if (rijndael != null) rijndael.Clear();
                if (rijndaelDecrypt != null) rijndaelDecrypt.Dispose();
                if (memStream != null) memStream.Close();
            }
        }

        /// Generates an encryption key based on the given phrase. The
        /// phrase is hashed with SHA-384 to create a 48-byte (384-bit)
        /// value, of which the first 24 bytes (192 bits) are used for the
        /// key and the next 16 bytes (128 bits) for the initialization vector (IV).
        private static void GenerateKey(string secretPhrase, out byte[] key, out byte[] IV)
        {
            // Initialize internal values
            key = new byte[24];
            IV = new byte[16];

            // Perform a hash operation using the phrase.
            byte[] bytePhrase = Encoding.ASCII.GetBytes(secretPhrase);
            SHA384Managed sha384 = new SHA384Managed();
            sha384.ComputeHash(bytePhrase);
            byte[] result = sha384.Hash;

            // Transfer the first 24 bytes of the hashed value to the key
            // and the next 16 bytes to the initialization vector.
            for (int index = 0; index < 24; index++) key[index] = result[index];
            for (int index = 24; index < 40; index++) IV[index - 24] = result[index];
        }
    }
}

Tuesday, March 11, 2008

Generate Sequential GUIDs for SQL Server 2005 in C#

Why generate sequential GUIDs in C#?

Originally, the uniqueidentifier (GUID) column in SQL Server was not supposed to be sequential. But in my case, having sequential GUIDs is quite useful:
my application needs to know which record was inserted first.

Fortunately, SQL Server 2005 supports the "default newsequentialid()" constraint, which makes a uniqueidentifier column grow sequentially [with every inserted record].

That worked quite well for me, until I decided to generate sequential GUIDs in C#.
(I needed that because I use SqlBulkCopy and save two tables that share the same generated GUID key.)

That turned out to be a tricky task. The reason: .NET and SQL Server treat GUIDs quite differently. In particular, they sort them quite differently.

Searching for solution

Alberto Ferrari's post How are GUIDs sorted by SQL Server? gave me a good idea about how to handle the problem.

I used modified Alberto's SQL code to find out what C#.NET GUID bytes are more [or less] significant from SQL Server 2005 ORDER BY clause perspective.

With UIDs As (
Select ID = 3, UID = cast ('01000000-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 2, UID = cast ('00010000-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 1, UID = cast ('00000100-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 0, UID = cast ('00000001-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 5, UID = cast ('00000000-0100-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 4, UID = cast ('00000000-0001-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 7, UID = cast ('00000000-0000-0100-0000-000000000000' as uniqueidentifier)
Union Select ID = 6, UID = cast ('00000000-0000-0001-0000-000000000000' as uniqueidentifier)
Union Select ID = 8, UID = cast ('00000000-0000-0000-0100-000000000000' as uniqueidentifier)
Union Select ID = 9, UID = cast ('00000000-0000-0000-0001-000000000000' as uniqueidentifier)
Union Select ID = 10, UID = cast ('00000000-0000-0000-0000-010000000000' as uniqueidentifier)
Union Select ID = 11, UID = cast ('00000000-0000-0000-0000-000100000000' as uniqueidentifier)
Union Select ID = 12, UID = cast ('00000000-0000-0000-0000-000001000000' as uniqueidentifier)
Union Select ID = 13, UID = cast ('00000000-0000-0000-0000-000000010000' as uniqueidentifier)
Union Select ID = 14, UID = cast ('00000000-0000-0000-0000-000000000100' as uniqueidentifier)
Union Select ID = 15, UID = cast ('00000000-0000-0000-0000-000000000001' as uniqueidentifier)
)
Select * From UIDs Order By UID


Note that the first line, with ID = 3, corresponds to:
new Guid(new byte[16] { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 })
That means:
(new Guid("01000000-0000-0000-0000-000000000000").ToByteArray()[3] == 1)


Now, when I run the modified Alberto's query, I get the following sequence:
3, 2, 1, 0, 5, 4, 7, 6, 9, 8, 15, 14, 13, 12, 11, 10

That means that GUID byte #3 is the least significant and GUID byte #10 is the most significant [from the SQL Server ORDER BY clause perspective].

Final solution

Now we're ready to write C# code that sequentially increments any given GUID.
I also made it more convenient to increment a GUID by overloading the "++" operator.
Here's how it's used:

private void Test()
{
    SequentialGuid sequentialGuid = new SequentialGuid(Guid.Empty);
    sequentialGuid++;
}


And here's the C# code that increments GUIDs sequentially:

public class SequentialGuid
{
    Guid _CurrentGuid;
    public Guid CurrentGuid
    {
        get { return _CurrentGuid; }
    }

    public SequentialGuid()
    {
        _CurrentGuid = Guid.NewGuid();
    }

    public SequentialGuid(Guid previousGuid)
    {
        _CurrentGuid = previousGuid;
    }

    public static SequentialGuid operator ++(SequentialGuid sequentialGuid)
    {
        byte[] bytes = sequentialGuid._CurrentGuid.ToByteArray();
        for (int mapIndex = 0; mapIndex < 16; mapIndex++)
        {
            int bytesIndex = SqlOrderMap[mapIndex];
            bytes[bytesIndex]++;
            if (bytes[bytesIndex] != 0)
            {
                break; // No need to increment more significant bytes
            }
        }
        sequentialGuid._CurrentGuid = new Guid(bytes);
        return sequentialGuid;
    }

    private static int[] _SqlOrderMap = null;
    private static int[] SqlOrderMap
    {
        get
        {
            if (_SqlOrderMap == null)
            {
                _SqlOrderMap = new int[16] { 3, 2, 1, 0, 5, 4, 7, 6, 9, 8, 15, 14, 13, 12, 11, 10 };
                // 3 - the least significant byte in the Guid ByteArray [for SQL Server ORDER BY clause]
                // 10 - the most significant byte in the Guid ByteArray [for SQL Server ORDER BY clause]
            }
            return _SqlOrderMap;
        }
    }
}
