SmallNetBuilder Forums

#21 | 06-20-2011, 06:54 PM | GregN (Very Senior Member)

Quote:
Originally Posted by Ch'tio (france)
Thanks a lot for this impressive How To!

Probably a stupid question, but I was wondering whether it is possible to access Old Shuck over FC from your Windows station and over Ethernet from other devices, just so BlackDog could be powered off.
You can access Old Shuck through the management channel (HTTPS over TCP/IP) even with the DAS server (BlackDog) down. You can only access the actual FC shared storage with the DAS server up. Ol' Shuck doesn't know about the filesystems on the disks; only your DAS server does.

This isn't easy to solve: for the storage to be available without the DAS server, you have to multipath and use a shared filesystem (NTFS is not a shared FS). Take a look at some of the other posts about approaches for that.
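
If you want a quick way to check that first point, here is a minimal Python sketch, purely illustrative, that polls the management channel over HTTPS. The address is a placeholder (OpenFiler's web UI normally listens on port 446), and a self-signed certificate is assumed:

Code:
# Poll the appliance's HTTPS management channel. This stays reachable
# even when the DAS server (BlackDog) is down; it says nothing about
# whether the FC storage itself is being exported.
import ssl
import urllib.request

MGMT_URL = "https://192.168.1.50:446/"   # placeholder management address

ctx = ssl.create_default_context()
ctx.check_hostname = False               # appliance uses a self-signed cert
ctx.verify_mode = ssl.CERT_NONE

try:
    with urllib.request.urlopen(MGMT_URL, context=ctx, timeout=5) as resp:
        print("Management channel up, HTTP status:", resp.status)
except OSError as exc:
    print("Management channel unreachable:", exc)
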
#22 | 06-22-2011, 08:43 AM | Ch'tio (france) (Guest)

Quote:
Originally Posted by GregN
You can access Old Shuck through the management channel (HTTPS over TCP/IP) even with the DAS server (BlackDog) down. You can only access the actual FC shared storage with the DAS server up. Ol' Shuck doesn't know about the filesystems on the disks; only your DAS server does.

This isn't easy to solve: for the storage to be available without the DAS server, you have to multipath and use a shared filesystem (NTFS is not a shared FS). Take a look at some of the other posts about approaches for that.
Thank you, I did not catch the shared FS subtlety at first. I'm looking forward to reading the third part!
#23 | 10-03-2011, 10:16 AM | Unregistered (Guest)
How to use Win 2008 instead of OpenFiler for target?

I have pretty much the exact same setup (except I scored 4-port QLogic cards instead) and am trying to set up my Win 2008 server to export its RAID over the QLogic cards. I haven't yet figured out how to build this exact same point-to-point setup using two Windows boxes.

Any pointers on how to replace the OpenFiler box with a Win 2008 box instead?

Also, since I have a few 4-port cards and can set up four point-to-point connections (one server, three workstations including a Mac), what is the best way of sharing the disk? I realize this gets into clustered filesystem territory, but I can't find an open source FS that works between Windows machines (and of course I'm adding a Mac to the equation). Commercial filesystems are out of my budget for the time being.

Any way I could just do file sharing over IP on these point-to-point fibre connections?

Any pointers greatly appreciated!
#24 | 10-03-2011, 11:37 AM | GregN (Very Senior Member)

I'm not a Windows expert, but to do sharing or multipathing, I'm pretty sure you have to be running Hyper-V on Server 2008 R2. I think this is primarily for failover between VMs.

There is a TechNet article about using CSV (Cluster Shared Volumes) and how to set them up. It is not at all clear. The advantage appears to be that the overlying filesystem is NTFS, so nothing needs to be done on the client nodes.
#25 | 10-04-2011, 05:58 AM | Unregistered (Guest)

I've skimmed over Windows CSV in the Hyper-V material and it's not applicable to what I'm doing. It seems to be just for failover, not for shared access.

I want multiple Windows boxes (and a Mac) to simultaneously mount/read/write a drive from a Win 2008 server via the point-to-point setup done in this tutorial.

I'll settle for just one workstation and one server in Windows right now and worry about the shared filesystem later, once I figure out how to get a fibre point-to-point link between Windows boxes.

But for now, how do I get a Win 2008 server to do what the author is doing with the OpenFiler setup? I have the same hardware (just more ports).
#26 | 10-04-2011, 07:57 PM | 00Roush (Very Senior Member)

Quote:
Originally Posted by Unregistered
I've skimmed over Windows CSV in the Hyper-V material and it's not applicable to what I'm doing. It seems to be just for failover, not for shared access.

I want multiple Windows boxes (and a Mac) to simultaneously mount/read/write a drive from a Win 2008 server via the point-to-point setup done in this tutorial.

I'll settle for just one workstation and one server in Windows right now and worry about the shared filesystem later, once I figure out how to get a fibre point-to-point link between Windows boxes.

But for now, how do I get a Win 2008 server to do what the author is doing with the OpenFiler setup? I have the same hardware (just more ports).
I think you would need to start out exactly the same as Greg did in his article. Download the SANSurfer software from QLogic's website and set up your client in the same fashion described in the article.

For the server side you might take a look at Microsoft's TechNet Library. I did a quick check on my server (Win 2008 Server R2) and all you need to install is the Storage Manager for SANs feature. I think from there you just need to install the QLogic card in your server and maybe download drivers/software from QLogic. Once that is done you should be able to open the Storage Manager for SANs program; it will automatically find your Fibre Channel setup and allow you to assign disks to it. But this is just a guess based on the little bit of research I did.

Hope that helps.

00Roush
#27 | 10-05-2011, 03:49 AM | GregN (Very Senior Member)

Roush pretty much nailed the approach; low risk all the way.

There appear to be two approaches to sharing LUNs across multiple machines (caveat: I have not done this and do not claim to be an expert, but this would be my approach):

Hyper-V with Windows 2008 running CSV:

As with all Microsoft kitchen-sink tech, this has, to quote Amadeus, "too many notes": all the pieces have to be in place, and you have to identify the machines that are going to be participating.

From the link I provided, you'll see under number four:

Quote:
CSV allows every cluster node to access the disk concurrently. This is accomplished by creating a common namespace under %SystemDrive%\ClusterStorage. For this reason, it is necessary to have the OS on the same drive letter on every node in the cluster (such as C:\, which will be used in this blog). You will see the same directory from every node in the cluster and this is the way to access CSV disks.
To run CSV you have to be running Server 2008, Hyper-V, and have clustering set up.
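
Once those pieces are in place, a minimal Python sketch like the one below (purely illustrative, run from any node in the cluster) will show you the common namespace the quote above describes:

Code:
# List the Cluster Shared Volumes visible from this node. Per the quote
# above, every node in the cluster sees the same paths under
# %SystemDrive%\ClusterStorage once CSV is enabled.
import os

csv_root = os.environ.get("SystemDrive", "C:") + r"\ClusterStorage"

for volume in sorted(os.listdir(csv_root)):
    print(os.path.join(csv_root, volume))   # e.g. C:\ClusterStorage\Volume1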

VMware ESXi with Server 2008 on top, using VMFS:

The other approach is the one mentioned earlier in this thread: run ESXi with Server 2008 on top of it. This gives you VMFS, and NTFS can run on top of that.

In either case you need a hypervisor to coordinate block access to the disks.

I am VERY interested in your attempts and what you learn; please keep us apprised. I'll help where I can.
#28 | 10-06-2011, 02:34 PM | Unregistered (Guest)

Great series, Greg. Any idea when Part 3 will be ready? I'm anxiously awaiting your update.

Do you think you might consider adding a used Fibre Channel switch and doing some multipathing? From the very little I've read, it seems SCST is fairly straightforward.

My goal would be to have the SAN with two client nodes, each with two paths to the SAN. I could probably skip the fibre switch and just put an extra card or two in the SAN.
#29 | 11-10-2011, 07:00 PM | conixit (New Member)

Could you tell me what the difference is between a storage appliance like this (DDN S2A6620) and building out this fibre channel SAN? It looks like these products are closer to a very large NAS than to a SAN. I don't see a motherboard, CPU(s), or memory.
#30 | 11-11-2011, 04:26 AM | GregN (Very Senior Member)

Quote:
Originally Posted by conixit
Could you tell me what the difference is between a storage appliance like this (DDN S2A6620) and building out this fibre channel SAN? It looks like these products are closer to a very large NAS than to a SAN. I don't see a motherboard, CPU(s), or memory.
The definitional difference is that a SAN provides storage at the block level, while a NAS provides it at the file level. According to the spec sheet, the S2A6620 provides multiple 8 Gb Fibre Channel interface points.
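
Here is a minimal sketch of that block-versus-file distinction, assuming a Linux client and purely hypothetical device and share paths:

Code:
import os

# Block level (what a SAN exports): the initiator sees a raw LUN and reads
# by byte offset; the client's own filesystem gives those bytes meaning.
fd = os.open("/dev/sdb", os.O_RDONLY)      # hypothetical FC LUN
os.lseek(fd, 4096, os.SEEK_SET)            # jump to an arbitrary offset
sector = os.read(fd, 512)                  # read one 512-byte sector
os.close(fd)

# File level (what a NAS serves): the server owns the filesystem and the
# client asks for whole files, not blocks.
with open("/mnt/nas_share/notes.txt", "rb") as f:   # hypothetical NFS/CIFS mount
    data = f.read()

Nothing stops two initiators from writing to those raw blocks at the same time, which is exactly why the shared filesystem discussion earlier in this thread matters.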

It also seems to have a managed cache of 12 GB and is highly optimized for disk I/O.

Ol' Shuck is a Volkswagen next to that Mercedes, but they are more alike than different.