Discussion:
NLB web servers sharing a common file store and DFS
Ken L
2006-04-17 18:16:51 UTC
I have a pair of servers I want to set up in an NLB configuration to handle
web transactions. These transactions receive observations from a large
telemetry network and store them in small files (I want to use NLB to
improve on the availability of our current single-server environment). I
have other applications that use web transactions to read the stored data,
so I need the web servers in the NLB cluster to access the same file store.

I am planning to address this by pointing the home directories of the web
servers at the same shared directory on a file server that is accessible to
both web servers. Ideally, the file server would be a shared-SCSI cluster,
but my budget doesn't stretch to a full cluster config. Looking for a
cheaper alternative to cover failures of the file server, I considered
using DFS and building a replica set of the directory shared by the web
servers. I would point the web servers at the DFS link so they can home to
the primary or the replicated directory without my having to make config
changes on all the web servers if the file server fails. I know there is a
delay in replicating files across the DFS set, and I can live with that if
I only have to fall back to the replica set when the file server fails.
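In concrete terms, I was thinking of something like the following (just a
sketch; dfscmd ships with Windows, but the domain, server, and share names
are made up):

  rem Create a DFS link under an existing domain DFS root, pointed at
  rem the primary file server share:
  dfscmd /map \\MYDOMAIN\dfsroot\webdata \\fileserver1\webdata "web data"

  rem Add the replica share as a second target of the same link:
  dfscmd /add \\MYDOMAIN\dfsroot\webdata \\fileserver2\webdata

  rem Confirm both targets are registered:
  dfscmd /view \\MYDOMAIN\dfsroot /full

The web servers' home directories would then point at
\\MYDOMAIN\dfsroot\webdata rather than at either file server directly.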

My question is whether there is any way to make the web servers always use
the DFS master rather than the replica (unless, of course, the master is
down). I was thinking of putting the DFS replica back on one of the web
servers, and I am worried that the web server on that machine would use the
"delayed" replica copy on that machine rather than the master on the shared
file server (because it is local and therefore "closer").

Thanks in advance for any assistance.

Ken
bkmonroe
2006-04-19 14:50:01 UTC
Ken,

We are in a similar situation. Currently we have an NLB web server cluster
that points home directories to a 2-node MS file server cluster. We have
been happy with this setup for over a year now, except that when our file
server cluster dies, our web servers die along with it.

In an attempt to break the dependency, we are looking into storing the data
files for each web server locally and then doing DFS replication between the
two. The problem is exactly as you state it: there is a lag (albeit short)
in the replication of file changes. Database file changes are especially
concerning. If we could point the home directories to a DFS share, and if
the master were the same on each web server, then at least database files
could be locked against simultaneous write attempts. Without having each web
server point to the same store, we run the risk of simultaneous writes, with
the last write "winning" and the first session's data being lost.

I would appreciate it if you would post any solution you find to this problem.

-Brian
Russ Kaufmann [MVP]
2006-04-19 16:13:31 UTC
Post by bkmonroe
We have been happy with this setup for over a year now, except that when
our file server cluster dies, our web servers die along with it.
What is happening to bring down your entire file server cluster? Isn't
keeping it available one of the reasons for using clustering?
--
Russ Kaufmann
MVP - Windows Server - Clustering
ClusterHelp.com, a Microsoft Certified Gold Partner
Web http://www.clusterhelp.com
Blog http://msmvps.com/clusterhelp
bkmonroe
2006-04-19 16:25:03 UTC
It was a real bear to troubleshoot. Turns out that the backup software
(ARCServe) had a flaw that caused the disk signatures to change when we
performed our monthly partial data restores from tape.

http://supportconnect.ca.com/sc/solcenter/solresults.jsp?aparno=QO75197&startsearch=1
Russ Kaufmann [MVP]
2006-04-19 16:33:14 UTC
Post by bkmonroe
It was a real bear to troubleshoot. Turns out that the backup software
(ARCServe) had a flaw that caused the disk signatures to change when we
performed our monthly partial data restores from tape.
http://supportconnect.ca.com/sc/solcenter/solresults.jsp?aparno=QO75197&startsearch=1
OK, so, back to my original thought, why would you want to remove
clustering? You obviously need the high availability.
--
Russ Kaufmann
MVP - Windows Server - Clustering
ClusterHelp.com, a Microsoft Certified Gold Partner
Web http://www.clusterhelp.com
Blog http://msmvps.com/clusterhelp
bkmonroe
2006-04-19 16:48:02 UTC
Thanks for your input. We don't want to remove clustering. We want to
remove our web servers' dependency on our file server. We are actually
debating the issue in house. My feeling is to just leave it as it is:
2-node NLB web servers pointing to a 2-node MSCS file server for home
directories. It works beautifully, except when the file server cluster dies.

We are considering moving the data to the local hard drives of the NLB web
servers and using DFS replication between the nodes. Is this a step in the
right direction or the wrong one?

-Brian
Russ Kaufmann [MVP]
2006-04-19 18:16:26 UTC
Post by bkmonroe
Thanks for your input. We don't want to remove clustering. We want to
remove our web servers' dependency on our file server.
We are considering moving the data to the local hard drives of the NLB web
servers and using DFS replication between the nodes. Is this a step in the
right direction or the wrong one?
Personally, I would stick with the file server cluster and get the backup
problem fixed. If that is not an option, then I would look at robocopy as a
solution, since it can be configured to monitor for changes and replicate
them.
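Something along these lines, as a rough sketch (the share names are
hypothetical, and this is a one-way mirror):

  rem /MIR mirrors the tree, including deletions; /MON:1 runs another
  rem pass once at least 1 change is seen; /MOT:5 waits at least 5
  rem minutes between passes; /R and /W tame the retry behavior.
  robocopy \\web1\webdata \\web2\webdata /MIR /MON:1 /MOT:5 /R:3 /W:10

Keep in mind that /MIR deletes destination files that no longer exist on
the source, so it only makes sense with a clearly defined primary and
replica.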
--
Russ Kaufmann
MVP - Windows Server - Clustering
ClusterHelp.com, a Microsoft Certified Gold Partner
Web http://www.clusterhelp.com
Blog http://msmvps.com/clusterhelp
bkmonroe
2006-04-19 15:31:02 UTC
Ken,

You can disable DFS referrals on one node of the replica set. This would
force DFS queries to go to a single node in the set, but it also removes
the fault tolerance.
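One way to do it is through the Distributed File System snap-in: select
the link, right-click the target that should stay passive, and disable
referrals on it (I'm describing this from memory; the exact menu wording
may vary by version). You can then confirm which target each client
actually resolved to with dfsutil, which ships in the Windows Support
Tools:

  rem Show the DFS client's cached referrals, including which target
  rem is currently active for each path:
  dfsutil /pktinfo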

-Brian
bkmonroe
2006-04-19 16:39:01 UTC
I set up a test: two NLB web servers, each with a DFS share in a replica
set. On each web server I set the DFS referral path to the same node, node
1, so all queries to the DFS share go to node 1 and are then replicated to
node 2. Assuming the home directories point to the DFS replica set, you
have no more worries about simultaneous writes overwriting a database, for
example, than you would if the data were on a single hard drive.

Bad news: when node 1 is rebooted, node 2 naturally becomes the active
path to the DFS share. That wouldn't be so bad provided both nodes used the
same DFS path once node 1 came back up, but they don't. In the test, after
node 1 bounces, node 2 points to node 1 as the active DFS path while node 1
points to node 2 for its path. This is a concern.
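For what it's worth, the active path can be checked from each node with
dfsutil (it ships in the Windows Support Tools):

  rem Show which target the local DFS client has currently selected
  rem for each cached referral:
  dfsutil /pktinfo

  rem Flush the cached referrals to force the client to re-query the
  rem namespace and pick a target again:
  dfsutil /pktflush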

-Brian