ec2 multiple remote nothing MN(max 5)

chaeplin

Active Member
Core Developer
Mar 29, 2014
749
356
133
Use the following guides to set up one local (cold) - remote (nothing) MN:
https://www.darkcointalk.org/threads/how-to-set-up-ec2-t1-micro-ubuntu-for-masternode-part-1-3.240/
https://www.darkcointalk.org/threads/how-to-set-up-ec2-t1-micro-ubuntu-for-masternode-part-2-3.241/
https://www.darkcointalk.org/threads/how-to-set-up-ec2-t1-micro-ubuntu-for-masternode-part-3-3.262/

The current local - remote setup is 1 local / multiple wallets + multiple MNs / multiple instances.
With multiple MNs on 1 instance, the local - remote setup can be 1 local / multiple wallets + multiple MNs / 1 instance.

* This guide uses one EC2 m3.medium for the remote (nothing) side:
+ 5 private IPs
+ 5 elastic IPs (restriction: max 5 elastic IPs per region)
+ 5 user accounts
+ each user runs darkcoind (bind + rpcport + externalip)

* On instance setup, if a public IP is assigned automatically, the number of public IPs can be 6:
1 public IP + 5 elastic IPs. So add 1 more private IP, 1 more user, 1 more port...

* iptables connlimit works well with -d ;D

* iptables "-j REJECT --reject-with tcp-reset" changed to "-j DROP"

Differences from one local (cold) - remote (nothing):

1) On the EC2 console

* Step 7: Review Instance Launch
--> Edit instance details --> select subnet --> Network interfaces --> add IPs (4 more) --> launch

* Elastic IP --> Allocate New Address --> max 5 --> associate each with an instance interface IP --> 1:1 mapping.




2) On the EC2 instance

* Add 5 users
Code:
useradd -m nm01
useradd -m nm02
useradd -m nm03
useradd -m nm04
useradd -m nm05
Code:
passwd -l nm01
passwd -l nm02
passwd -l nm03
passwd -l nm04
passwd -l nm05
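The ten commands above follow one pattern, so they can be generated with a loop. A sketch (it prints the commands rather than running them, since account creation needs root; pipe the output to `sudo sh` to apply):

```shell
# Print the account-creation commands for the five masternode users.
# Remove the echo (or pipe to "sudo sh") to actually run them as root.
for u in nm01 nm02 nm03 nm04 nm05; do
  echo "useradd -m $u"
  echo "passwd -l $u"   # lock the password; access via su/sudo only
done
```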
* Add IP aliases
/etc/rc.local
Code:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#
/sbin/ifconfig eth0:1 172.31.13.142 netmask 255.255.240.0 up
/sbin/ifconfig eth0:2 172.31.13.143 netmask 255.255.240.0 up
/sbin/ifconfig eth0:3 172.31.13.144 netmask 255.255.240.0 up
/sbin/ifconfig eth0:4 172.31.13.145 netmask 255.255.240.0 up
#
/sbin/iptables-restore < /etc/iptables
#
exit 0
Code:
/etc/rc.local
* Add iptables rules
* If you want to check POSTROUTING, add the port 80 rules.
* curl http://ipecho.net/plain && echo
Code:
#----------
*nat
:PREROUTING ACCEPT [329861:16309264]
:POSTROUTING ACCEPT [785521:53005289]
:OUTPUT ACCEPT [785521:53005289]
#
-A POSTROUTING -m owner --uid-owner nm01 -p tcp --dport 9999 -j SNAT --to-source 172.31.13.72
-A POSTROUTING -m owner --uid-owner nm02 -p tcp --dport 9999 -j SNAT --to-source 172.31.13.145
-A POSTROUTING -m owner --uid-owner nm03 -p tcp --dport 9999 -j SNAT --to-source 172.31.13.144
-A POSTROUTING -m owner --uid-owner nm04 -p tcp --dport 9999 -j SNAT --to-source 172.31.13.143
-A POSTROUTING -m owner --uid-owner nm05 -p tcp --dport 9999 -j SNAT --to-source 172.31.13.142
#
-A POSTROUTING -m owner --uid-owner nm01 -p tcp --dport 19999 -j SNAT --to-source 172.31.13.72
-A POSTROUTING -m owner --uid-owner nm02 -p tcp --dport 19999 -j SNAT --to-source 172.31.13.145
-A POSTROUTING -m owner --uid-owner nm03 -p tcp --dport 19999 -j SNAT --to-source 172.31.13.144
-A POSTROUTING -m owner --uid-owner nm04 -p tcp --dport 19999 -j SNAT --to-source 172.31.13.143
-A POSTROUTING -m owner --uid-owner nm05 -p tcp --dport 19999 -j SNAT --to-source 172.31.13.142
#
-A POSTROUTING -m owner --uid-owner nm01 -p tcp --dport 80 -j SNAT --to-source 172.31.13.72
-A POSTROUTING -m owner --uid-owner nm02 -p tcp --dport 80 -j SNAT --to-source 172.31.13.145
-A POSTROUTING -m owner --uid-owner nm03 -p tcp --dport 80 -j SNAT --to-source 172.31.13.144
-A POSTROUTING -m owner --uid-owner nm04 -p tcp --dport 80 -j SNAT --to-source 172.31.13.143
-A POSTROUTING -m owner --uid-owner nm05 -p tcp --dport 80 -j SNAT --to-source 172.31.13.142
#
COMMIT
# Completed on Tue Apr  5 16:44:54 2011
# Generated by iptables-save v1.4.8 on Mon Oct 17 18:30:57 2011
*filter
:INPUT ACCEPT [1038:145425]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [434:87191]
#
-A INPUT -i lo -j ACCEPT
#
-A INPUT -p tcp -m tcp --dport 9998 -j REJECT --reject-with tcp-reset
-A INPUT -p tcp -m tcp --dport 9997 -j REJECT --reject-with tcp-reset
-A INPUT -p tcp -m tcp --dport 9996 -j REJECT --reject-with tcp-reset
-A INPUT -p tcp -m tcp --dport 9995 -j REJECT --reject-with tcp-reset
-A INPUT -p tcp -m tcp --dport 9994 -j REJECT --reject-with tcp-reset
#
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.72 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 8 --connlimit-mask 24 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.72 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 2 --connlimit-mask 32 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.145 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 8 --connlimit-mask 24 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.145 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 2 --connlimit-mask 32 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.144 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 8 --connlimit-mask 24 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.144 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 2 --connlimit-mask 32 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.143 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 8 --connlimit-mask 24 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.143 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 2 --connlimit-mask 32 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.142 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 8 --connlimit-mask 24 --connlimit-saddr -j DROP
-A INPUT -i eth0 -p tcp -m tcp -d 172.31.13.142 --dport 9999 --tcp-flags FIN,SYN,RST,ACK SYN -m connlimit --connlimit-above 2 --connlimit-mask 32 --connlimit-saddr -j DROP
#
-A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 9999 -j ACCEPT
-A INPUT -p tcp -j ACCEPT
#
-A OUTPUT -o lo -j ACCEPT
#
-A OUTPUT -p tcp -m tcp --sport 9999 -m conntrack --ctstate NEW -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 9999 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 9999 -j ACCEPT
-A OUTPUT -j ACCEPT
COMMIT
#-----
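All fifteen SNAT rules above are instances of one pattern: match outbound packets by the owning UID and rewrite the source address to that user's alias IP. A sketch that generates them from a single user-to-IP map, so the mapping lives in one place (the map below mirrors the addresses used above; adjust it to your own assignments):

```shell
# Generate the per-user SNAT rules for ports 9999, 19999 and 80.
# Edit the map to match your own private-IP assignments.
map="nm01:172.31.13.72
nm02:172.31.13.145
nm03:172.31.13.144
nm04:172.31.13.143
nm05:172.31.13.142"

echo "$map" | while IFS=: read -r user ip; do
  for port in 9999 19999 80; do
    echo "-A POSTROUTING -m owner --uid-owner $user -p tcp --dport $port -j SNAT --to-source $ip"
  done
done
```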
Code:
/sbin/iptables-restore < /etc/iptables
* Test the outgoing IP
* as each user:
Code:
curl ipecho.net/plain ; echo
/etc/sysctl.conf
Code:
#
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.tcp_syncookies=1
net.ipv4.conf.all.accept_redirects=0
net.ipv6.conf.all.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_source_route=0
net.ipv6.conf.all.accept_source_route=0
net.ipv4.conf.all.log_martians=1
net.core.rmem_default=33554432
net.core.wmem_default=33554432
net.core.rmem_max=33554432
net.core.wmem_max=33554432
net.core.optmem_max=33554432
net.ipv4.tcp_rmem=10240 87380 33554432
net.ipv4.tcp_wmem=10240 87380 33554432
net.ipv4.ip_local_port_range=2000 65500
net.core.netdev_max_backlog=100000
net.ipv4.tcp_max_syn_backlog=80000
net.ipv4.tcp_max_tw_buckets=2000000
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_fin_timeout=5
net.ipv4.tcp_slow_start_after_idle=0
net.core.somaxconn=20480
fs.file-max=1000000
vm.swappiness=10
vm.min_free_kbytes=1048576
#
Code:
sysctl -p

* darkcoin.conf differences
* user nm01
Code:
externalip=x.x.9.246
bind=172.31.13.72
rpcport=9998
discover=0
* user nm02
Code:
externalip=x.x.6.15
bind=172.31.13.145
rpcport=9997
discover=0
* user nm03
Code:
externalip=x.x.12.226
bind=172.31.13.144
rpcport=9996
discover=0
* user nm04
Code:
externalip=x.x.14.230
bind=172.31.13.143
rpcport=9995
discover=0
* user nm05
Code:
externalip=x.x.16.165
bind=172.31.13.142
rpcport=9994
discover=0
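Since the five fragments differ only in externalip, bind and rpcport, they can be generated from one list. A sketch (it writes to /tmp/<user>-darkcoin.conf for review; in a real setup each file belongs in that user's ~/.darkcoin/darkcoin.conf, owned by that user, and the x.x external IPs are placeholders exactly as above):

```shell
# Generate one darkcoin.conf per user from a user:externalip:bind:rpcport map.
# Writes to /tmp/<user>-darkcoin.conf for review; move each file to the
# user's own ~/.darkcoin/darkcoin.conf in a real setup.
map="nm01:x.x.9.246:172.31.13.72:9998
nm02:x.x.6.15:172.31.13.145:9997
nm03:x.x.12.226:172.31.13.144:9996
nm04:x.x.14.230:172.31.13.143:9995
nm05:x.x.16.165:172.31.13.142:9994"

echo "$map" | while IFS=: read -r user ext bind port; do
  cat > "/tmp/$user-darkcoin.conf" <<EOF
externalip=$ext
bind=$bind
rpcport=$port
discover=0
EOF
done
```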


This is a pstree from my sample server:

[email protected]:~> pstree -u

systemd─┬─agetty
├─cron
├─darkcoind(user)───33*[{darkcoind}]
├─darkcoind(nm01)───28*[{darkcoind}]
├─darkcoind(nm02)───28*[{darkcoind}]
├─darkcoind(nm03)───28*[{darkcoind}]
├─darkcoind(nm04)───28*[{darkcoind}]
├─darkcoind(nm05)───28*[{darkcoind}]



* CPU usage of the instances.
I am testing the multiple-MN setup with a single instance.
https://darkcointalk.org/threads/ec2-multiple-remote-nothing-nm-max-5.1660/
m3.medium and t2.medium; each instance has five RC darkcoind running.

I think t2.medium is a good choice for multiple MNs on a single instance.

m3.medium (CPU usage chart)

t2.medium (CPU usage chart)
 

splawik21

Yeah, it's me....
Dash Core Group
Foundation Member
Dash Support Group
Apr 8, 2014
1,957
1,316
1,283
LOL, so there is the first multi-MN on a hot wallet... congratulations chaeplin! This is great, but I'll stay with my local/remote one for now... maybe later, when there is a multi local/remote, I'll try my chances and set one up :)
 

chaeplin

* On instance setup, if a public IP is assigned automatically, the number of public IPs can be 6:
1 public IP + 5 elastic IPs. So add 1 more private IP.
 

chaeplin

CPU usage added: m3.medium and t2.medium, each with 5 RC darkcoind.
 

vertoe

Three of Nine
Mar 28, 2014
2,573
1,652
1,283
Unimatrix Zero One
chaeplin what are the prices of t2.medium and m3.medium compared to t1/t2.micro instances?
This approach looks hellishly complicated. What if I want to run - let's say - 25 masternodes? Isn't that a hell of a lot of maintenance?
 

chaeplin

chaeplin what are the prices of t2.medium and m3.medium compared to t1/t2.micro instances?
This approach looks hellishly complicated. What if I want to run - let's say - 25 masternodes? Isn't that a hell of a lot of maintenance?
http://aws.amazon.com/ec2/pricing/ ;D

darkcoin.conf is somewhat complicated, but each one belongs to a different user.
Making a list of matched private ip = public ip = externalip = bind = rpcport = masternodeprivkey is the key to success.
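That bookkeeping list can be as simple as a fixed-width table printed per node. A sketch with placeholder values following the guide above (the masternodeprivkey column is omitted here since those values should stay private anyway):

```shell
# Print the per-node bookkeeping table: user, private IP (= bind),
# public/external IP (placeholder values), and rpcport.
printf '%-6s %-15s %-12s %s\n' \
  USER  PRIVATE_IP     EXTERNAL_IP RPCPORT \
  nm01  172.31.13.72   x.x.9.246   9998 \
  nm02  172.31.13.145  x.x.6.15    9997 \
  nm03  172.31.13.144  x.x.12.226  9996 \
  nm04  172.31.13.143  x.x.14.230  9995 \
  nm05  172.31.13.142  x.x.16.165  9994
```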
 

chaeplin

Yeah this site says N/A for t2 instances, thought you might know...
hmmm


General Purpose - Current Generation (Linux/UNIX usage, USD per hour):

Instance     vCPU   ECU        Memory (GiB)   Instance Storage (GB)   Price
t2.micro     1      Variable   1              EBS Only                $0.013 per Hour
t2.small     1      Variable   2              EBS Only                $0.026 per Hour
t2.medium    2      Variable   4              EBS Only                $0.052 per Hour
m3.medium    1      3          3.75           1 x 4 SSD               $0.070 per Hour
m3.large     2      6.5        7.5            1 x 32 SSD              $0.140 per Hour
m3.xlarge    4      13         15             2 x 40 SSD              $0.280 per Hour
m3.2xlarge   8      26         30             2 x 80 SSD              $0.560 per Hour
 

karisu

Member
Jun 30, 2014
70
26
58
hmmm

(quoted EC2 pricing table trimmed)
You should have a look at the reserved instances if you want to use EC2. They are much cheaper, but a root server with IPv6 would be the best choice if it works. I hope I can get my hands on one on Monday and check whether it works...
 

stonehedge

Well-known Member
Foundation Member
Jul 31, 2014
696
333
233
My t1.micro instances are starting to get CPU maxed out on a fairly regular basis so I'm having a play with a t2.medium with a view to running a few on it. I've got one mnode up and running and it is purring like a kitten.

You're a genius chaeplin. Or at least I think you are because I'm not entirely certain what I'm doing ;)
 

vertoe

My t1.micro instances are starting to get CPU maxed out on a fairly regular basis so I'm having a play with a t2.medium with a view to running a few on it. I've got one mnode up and running and it is purring like a kitten.

You're a genius chaeplin. Or at least I think you are because I'm not entirely certain what I'm doing ;)
I wouldn't get anything else done if I tried to monitor the CPU usage of all my masternodes. Just curious, why even bother? As long as payments are rolling in, I'm satisfied.
 

stonehedge

I wouldn't get anything else done if I tried to monitor the CPU usage of all my masternodes. Just curious, why even bother? As long as payments are rolling in, I'm satisfied.
Curiosity and enjoyment of learning more about Linux and EC2 is my main driver.

I have also spotted that resource usage has been rising steadily on my masternodes since RC3, to the point that my t1.micro instances have been maxed out. I personally suspect that as the network hash rate and transaction rate grow, there will be a point where t1.micro won't cut it any more and will receive fewer payments than instances with more grunt. This is pure speculation on my part.

It is probably just a total fluke but I received 12DRK on the three masternodes that I put onto T2.medium in under 12 hours last night. I'm really just interested to see if there is any difference in payment performance of my t1.micro and t2.medium masternodes over an extended period of time.

As a hardware man I freak when I see utilisation over 30% average, and my t1.micros were getting close to 100% average over multi-hour periods.
 

stonehedge

As a business man I freak when I see costs eating up profits by more than 30% :tongue:
LOL. I sat down with my business partner last night (Mrs Stonehedge) and discussed exactly that! We decided that the possible benefit outweighed the cost increase given that we personally have confidence in all round growth for DRK.

My hypothesis is that (ignoring the issues with Amazon dominance) t1.micro will not be able to handle the demands of a non load balanced P2P network in the not too distant future and besides I'm having loads of fun learning new stuff anyway :)
 

stonehedge

Some observations.

Chaeplin's guide is really good. These are the two things I learned:

1) Look out for the typos in his iptables... there are a couple of whitespaces missing ;)
2) If any of the IPs cannot be assigned, the whole allocation defined in rc.local fails. In my case, I spent hours trying to work out what was wrong. It turned out that there is no need to include your instance's default IP in rc.local. If you do, it will not work.

Other than that, the whole thing has been a success. I've got four nodes on a t2.medium chugging away and paying nicely :)

CPU and Bandwidth for 4 nodes:

Screen Shot 2014-09-09 at 18.42.15.png
Screen Shot 2014-09-09 at 18.42.46.png


Edit: Big thank you to chaeplin for helping me relearn a load of Linux that I had forgotten and learn some stuff that I didn't know already. It has been fun.

And it's really easy to add new masternodes when you've got the funds!
 

stonehedge

5 masternodes running on a t2.medium and it's purring like a kitten.

I am broke now though :)
 

stonehedge

Just a quick update on the cost of a T2.medium.

I am paying $2.60 per day on average to run 5 masternodes on one T2.medium instance. I think this is good value for money.

EDIT: I misspoke. That $2.60 includes the 6 testnet T1.micro spot instances that I'm running.
 

stonehedge

Yup...$1.34 per day to run 5 masternodes on a t2.medium.

I've got some of the setup scripted now. Maybe I'll start a masternode share service :)
 

stonehedge

Including tax and traffic?
That is inclusive of traffic, but the cost explorer doesn't say whether the figure includes tax. I assume it excludes tax.

I think there is also a threshold below which traffic is free... I've only been running this setup for a couple of weeks, so let's see what happens over the month...
 

stonehedge

Just set up a sixth masternode on a single T2.medium. Seems to be running fine. For capacity planning purposes I think I'm going to cap it at 6 masternodes per T2.medium.

Amazon have given me 11 elastic IPs per region for some reason. I only asked them for one more.

Maybe one day I'll have enough masternodes to try this out on a bigger instance!
 

stonehedge

Damn nice, 11 IPs? How did you request them?
If you go to elastic IPs and click on the help symbol in the top right hand corner and scroll right to the bottom there is a link to a form to request a limit increase.

I only asked for an increase of one (Ireland) but they replied and said that I had a limit of 11 for that region now. Pretty cool I guess! If I'm lucky enough to own more masternodes in the future I'm going to see if I can keep them all on the same appliance just for fun.

On a separate note, I transferred some money from one bank to another yesterday in order to buy some DRK early today. The payment has been put on hold for fraud checks, which means I won't get the money in my account until the banks open for business and I can tell them that I ordered the transfer. Let's hope the price stays stable for a few hours for me, eh? I'm only a few DRK short of my next masternode! I thought they'd realise how volatile the markets can be ;)
 

crowning

Well-known Member
May 29, 2014
1,414
1,997
183
Alpha Centauri Bc
On a separate note, I transferred some money from one bank to another yesterday in order to buy some DRK early today. The payment has been put on hold for fraud checks which means I won't get the money in my account until the banks open for business and I can tell them that I ordered the transfer.
I had this with one of my banks once (a phone call cleared it up), and the second time they refused to transfer the money at all and told me that they never will again (to that bank). Fuckers...

Thankfully I'm a customer of several banks; the second one didn't complain.
Never has so far :)
 

stonehedge

chaeplin I want to have seven masternodes on this instance, but you can only have 6 private IPs per network interface! If you create a new network interface after an instance is running, Linux is not configured to use it. Any tips?

I found this useful but I could not get it to work. http://work.allaboutken.com/node/21

Essentially, I need a way to enable and configure eth1
 

chaeplin

chaeplin I want to have seven masternodes on this instance, but you can only have 6 private IPs per network interface! If you create a new network interface after an instance is running, Linux is not configured to use it. Any tips?

I found this useful but I could not get it to work. http://work.allaboutken.com/node/21

Essentially, I need a way to enable and configure eth1
I don't think a secondary interface is a good choice.
Anyway, you can use ifconfig to set up eth1.

I suggest you use "Network Interfaces" in the EC2 Management Console to add more private IPs.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
There is "Manage Private IP Addresses" in the Actions menu.
Add more private IPs and associate them with Elastic IPs.
 

stonehedge

Thanks chaeplin.

I agree that a secondary interface isn't ideal. There is a fault with my AWS account that I'm waiting for Amazon to fix, which means I can't access new instances that I create, so I thought I'd find a workaround in my spare time.

If you add a new network interface, assign an elastic IP and enable it within Ubuntu, I think you need to set up routes to make it work. No matter what I tried after creating the new interface (darkcoind > private IP > public IP), I could not get nm07 to talk to the outside world through eth1. In fact, after configuring rc.local and iptables, the nm07 account had no public IP.

I'm an amateur and I have met my match. When Amazon solve my mysterious problem (new instances don't work with their key pairs!) I'll start a new instance and stick to six max per t2.medium.

Thanks for your advice anyway.
 

Propulsion

The buck stops here.
Feb 26, 2014
1,008
468
183
Dash Address
XerHCGryyfZttUc6mnuRY3FNJzU1Jm9u5L
Yup...$1.34 per day to run 5 masternodes on a t2.medium.

I've got some of the setup scripted now. Maybe I'll start a masternode share service :)
Have you actually received a monthly invoice yet for the total cost? I'm seriously interested in moving multiple micros to a single instance if possible. My last invoice was around $130.00 if I remember correctly.
 

stonehedge

Well-known Member
Foundation Member
Jul 31, 2014
696
333
233
Propulsion

My bill for this month is going to be all over the place because I've got instances running on testnet, NOMP instances etc etc but I can confirm the following:

My T2.medium is costing $1.34 per day with 6 masternodes running on it.
My masternode instance is the only instance in that region so if I pro rata up the data transfer fees for that region it comes to $7.50.

My estimate is that the total cost will be $50+tax based on a 31 day month. I will clarify this further when I've had a chance to go through my bills in more detail at the end of the month.

Now, if only somebody could help me find a way to enable a second network interface and get a 7th node on there for me :)

m3.large seems a big jump in price to host 7 to 10 masternodes! Looking at $100 pm on that. No other EC2 instance within a reasonable price range allows more than 6 IPs per interface.

I just love the idea of having one instance with them all on... it works so well. I might have to look elsewhere other than AWS unless somebody can help me with the Linux challenge of enabling eth1 in Ubuntu and getting it working properly.