2006-04-17 18:16:51 UTC
web transactions. These web transactions are used to receive and store, in
small files, observations from a large telemetry network (I want to use NLB
to improve the availability of our current single-server environment). Other
applications use web transactions to read the data that has been stored, so
I need the web servers in the NLB cluster to access the same file store.
I am planning to address this by pointing the home directories of the web
servers at the same shared directory on a file server that is accessible to
both web servers. Ideally, the file server would be a shared-SCSI cluster
server, but my budget doesn't stretch to a full cluster configuration.
Looking for a cheaper alternative to handle failures of the file server, I
considered using DFS and building a replica set of the directory shared by
the web servers. I would point the web servers at the DFS link so they can
home to the primary or replicated directories without my having to make
config changes on every web server if the file server fails. I know there is
a delay in replicating files across the DFS set, and I can live with that if
I only have to fall back to the replica set when the file server fails.
My question is whether there is any way to make the web servers always use
the DFS master rather than the replica (unless, of course, the master is
down). I was thinking of putting the DFS replica on one of the web servers
themselves, but I am worried that the web server on that machine would use
the "delayed" replica copy on its local disk rather than the master on the
shared file server (because the local copy is "closer").
Thanks in advance for any assistance.