Wednesday, December 4, 2013

OpenStack Havana is ready on India

We are happy to announce that OpenStack's Havana version is ready on India.

If you already have an account on FutureGrid, you can use it with these steps:
ssh <username>@india.futuregrid.org
module load novaclient
source ~/.futuregrid/openstack_havana/novarc

If you belong to multiple projects, you can easily switch your tenant/project with this:
source ~/.futuregrid/openstack_havana/<project_id>
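For example, if you were a member of two hypothetical projects fg123 and fg456, switching between them (following the file naming pattern above) would look like this:
source ~/.futuregrid/openstack_havana/fg123
nova list    # instances in project fg123
source ~/.futuregrid/openstack_havana/fg456
nova list    # instances in project fg456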

Aside from this initial setup, the rest of the how-to is the same as our OpenStack Grizzly Manual.

If you find an issue, please submit a ticket.
https://portal.futuregrid.org/help

OpenStack Orchestration, called Heat, will also be available soon.

Thanks,

Tuesday, November 5, 2013

New features in Phantom available on FutureGrid

We recently released the Summer Highlights for the Phantom project: many exciting new features were added, including support for multi-cloud appliances and contextualization. Head over to the Nimbus Project web site for a full description of the improvements.

Phantom allows the user to deploy a set of virtual machines over multiple private, community, and commercial clouds and then automatically grows or shrinks this set based on policies defined by the user. This elastic set of virtual machines can then be used to implement scalable and highly available services.

Phantom was introduced to the FutureGrid community at the XSEDE 2012 conference with examples from early Phantom users. A generally available Phantom FutureGrid service (alpha) was announced earlier this year.



Since then the FutureGrid Phantom service has deployed over 14,000 virtual machines on behalf of 26 users, who use it as a portal to multiple FutureGrid clouds, to cloud burst from FutureGrid to Amazon EC2, or to leverage its monitoring, scalability, and high availability features. If you want to try Phantom too, follow our documentation page, which will get you started very quickly.

We are continuously working hard to make Phantom a more powerful platform for FutureGrid. We are planning exciting new features in the fall!

Tuesday, October 22, 2013

Video: Using OpenStack command line tools



This lesson, Using OpenStack command line tools, explains how to use the OpenStack command-line tools on the FutureGrid cluster sierra.futuregrid.org.

For written material, see section OpenStack Grizzly in the FutureGrid manual.

Visit FutureGrid's Massive Open Online Courses (MOOCs) site for more video-based learning opportunities.

Thursday, October 17, 2013

Newly Released OpenStack Havana on FutureGrid

OpenStack Havana was released today. FutureGrid is happy to announce that undergraduates 
enrolled in a course for young computer scientists working in the field of software and
systems will be among the first to gain hands-on experience with FutureGrid's
implementation of the new version of OpenStack on India. For more about this course, 
please visit the FG-368 project page: https://portal.futuregrid.org/projects/368

Work is underway to make OpenStack Havana available to other FutureGrid users as soon as 
possible. Queries about FutureGrid's OpenStack Havana implementation should be sent to 
FutureGrid using the ticketing system:  https://portal.futuregrid.org/help

FutureGrid will continue to make OpenStack Grizzly available on Sierra.

For an overview of key features made available by the new OpenStack Havana release, 
please visit ZDNet's article, OpenStack Havana: Open-source cloud for the enterprise:
http://www.zdnet.com/openstack-havana-open-source-cloud-for-the-enterprise-7000022086/

Tuesday, October 8, 2013

Supernova instead of Nova?

We have installed Supernova on Sierra; it is a very useful tool for working with OpenStack. If you answer yes to either of the following questions, you should try Supernova.
  • Do you work with multiple projects (tenants)?
  • Do you run instances on more than one OpenStack deployment?
So here's how to use Supernova on Sierra.
* Before you try Supernova on FutureGrid, you need a FutureGrid account and should already know how to use OpenStack. Here's the link to our OpenStack tutorial -> http://manual.futuregrid.org/openstackgrizzly.html
Login to Sierra and load Supernova.
ssh username@sierra.futuregrid.org
module load supernova

Check your novarc file and create your ~/.supernova file like the one below. In this example, my account (user1) is a member of fg123 and fg456.
[fg123]
OS_AUTH_URL=https://s77r.idp.sdsc.futuregrid.org:5000/v2.0
OS_CACERT=/etc/futuregrid/openstack/sierra/cacert.pem
OS_PASSWORD=***************
OS_TENANT_NAME=fg123
OS_USERNAME=user1

[fg456]
OS_AUTH_URL=https://s77r.idp.sdsc.futuregrid.org:5000/v2.0
OS_CACERT=/etc/futuregrid/openstack/sierra/cacert.pem
OS_PASSWORD=***************
OS_TENANT_NAME=fg456
OS_USERNAME=user1

Don't forget to change the permissions so the file is unreadable by anyone else.
chmod 600 ~/.supernova

Now, you should be able to check your tenant list.
supernova --list
-- fg123 --------------------------------------------------------------------
  OS_AUTH_URL          : https://s77r.idp.sdsc.futuregrid.org:5000/v2.0
  OS_CACERT            : /etc/futuregrid/openstack/sierra/cacert.pem
  OS_PASSWORD          : ***************
  OS_TENANT_NAME       : fg123
  OS_USERNAME          : user1
-- fg456 --------------------------------------------------------------------
  OS_AUTH_URL          : https://s77r.idp.sdsc.futuregrid.org:5000/v2.0
  OS_CACERT            : /etc/futuregrid/openstack/sierra/cacert.pem
  OS_PASSWORD          : ***************
  OS_TENANT_NAME       : fg456
  OS_USERNAME          : user1

The usage is the same as the Nova client; you just replace "nova" with "supernova <tenant name>". Here's how to boot an instance on each tenant.
supernova fg123 boot --image futuregrid/ubuntu-12.04 --flavor m1.small --key-name mykey fg123vm001
supernova fg456 boot --image futuregrid/ubuntu-12.04 --flavor m1.small --key-name mykey fg456vm002
supernova fg123 list
supernova fg456 list
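Any other nova subcommand works the same way; for example, to compare keypairs and images across the two (hypothetical) tenants from the config above:
supernova fg123 keypair-list
supernova fg456 keypair-list
supernova fg123 image-list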

Very easy and very useful, isn't it?

In my work, I have multiple tenants and multiple OpenStack deployments to juggle, so Supernova is very, very helpful. I hope it makes your work easier, too!


Thanks,

Thursday, October 3, 2013

Nimbus/Alamo Updates

qcow2 images on Nimbus/Alamo 

Support for the qcow2 virtual machine image format has been added to Nimbus on Alamo. One feature of qcow2 is that the size of an image on disk grows only as data is added to it. This is in contrast to a raw image file, which has a fixed size (the size of the virtual disk) and which we typically gzip to reduce storage space.

One of the benefits of using qcow2 images is that Nimbus does not have to perform a gzip or gunzip step when running and saving virtual machines. This reduces the time to start and save virtual machines significantly. We will be adding qcow2 images for Linux distributions over time; if you wish to use specific distributions or versions, please let us know. An important note: the qcow2 images provided by some distributions (e.g., Ubuntu, Fedora) do not work well out of the box with Nimbus. The reason is that while Nimbus adds your ssh key to the root account of the virtual machine, these images do not allow direct login to the root account (they expect you to log in to a normal user account such as ubuntu or ec2-user and then use sudo).

Nimbus/Alamo networking improvements

The network device used by Nimbus virtual machines on Alamo has been changed from Realtek RTL-8139 to Virtio. After this change, our measurements show that network bandwidth to and from Nimbus virtual machines has improved by a factor of 3 or more. The Virtio device is supported by all of the existing virtual machine images that we tested. Your virtual machines should automatically detect this new device and benefit from the higher networking bandwidth. If you have any problems with networking on your virtual machines, please submit a ticket.

Nimbus/Alamo authentication changes

Due to the expiration of a CA certificate, the Nimbus services on Alamo are now using a new host certificate from a different certificate authority. We apologize for the inconvenience, but this will require that you make some changes to your Nimbus client to continue to use Alamo. The first change is to use a Nimbus cloud client that includes the CA certificates associated with our new Nimbus certificate. One way to accomplish this is to download the Nimbus cloud client version 022. An alternative is to modify your existing cloud client by manually adding the CA certificates to it. This can be accomplished by:

  1. Download the InCommon Server CA certificate and save it as nimbus-cloud-client-021/lib/certs/84df5188.0 
  2. Go to the Comodo AddTrust page and download the AddTrust External CA Root certificate. Save it as nimbus-cloud-client-021/lib/certs/3c58f906.0 
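As an aside, the hash-style filenames above (84df5188.0, 3c58f906.0) are OpenSSL subject hashes. If you ever need to derive such a name yourself, a sketch like this works (the certificate path is just an example, and the hash value can differ between OpenSSL versions):
openssl x509 -noout -hash -in AddTrustExternalCARoot.pem
# prints an 8-character hash; save the certificate as lib/certs/<hash>.0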
The second change is to modify your cloud client's conf/alamo.conf file (for example, nimbus-cloud-client-022/conf/alamo.conf) so it contains the DN of the new certificate, like so:

  •  vws.factory.identity=/C=US/2.5.4.17=78711/ST=TX/L=Austin/2.5.4.9=1 University Station/O=The University of Texas at Austin/OU=TACC - Texas Advanced Computing Center/CN=nimbus.futuregrid.tacc.utexas.edu 
 If you encounter any problems using Nimbus on Alamo, please submit a ticket for assistance.


Sierra OpenStack response time improvements

The good news is that Nova client response times on Sierra's OpenStack have improved.

This issue does not appear in the Essex version, so India's OpenStack is fine, but the Grizzly version keeps much more data in Keystone's database, which had been slowing down the authentication process. We took a serious look and tuned the MySQL configuration.

Here's the result of the improvement.

Before:
$ time nova list
+--------------------------------------+--------+--------+---------------------+
| ID                                   | Name   | Status | Networks            |
+--------------------------------------+--------+--------+---------------------+
| 8618a539-2162-41e6-92b1-c8177ebe100a | master | ACTIVE | private=10.35.23.9  |
| c373a455-e99c-45a6-be8f-ca75117fe9cb | node1  | ACTIVE | private=10.35.23.10 |
| 07eaa184-a8c6-4ece-9f35-5d3ba8c9c1e0 | node2  | ACTIVE | private=10.35.23.15 |
| bbc7408f-e8ba-45cb-b294-e857c53e9f7f | node3  | ACTIVE | private=10.35.23.55 |
+--------------------------------------+--------+--------+---------------------+

real    0m8.819s
user    0m0.307s
sys 0m0.127s

After:
$ time nova list
+--------------------------------------+--------+--------+---------------------+
| ID                                   | Name   | Status | Networks            |
+--------------------------------------+--------+--------+---------------------+
| 8618a539-2162-41e6-92b1-c8177ebe100a | master | ACTIVE | private=10.35.23.9  |
| c373a455-e99c-45a6-be8f-ca75117fe9cb | node1  | ACTIVE | private=10.35.23.10 |
| 07eaa184-a8c6-4ece-9f35-5d3ba8c9c1e0 | node2  | ACTIVE | private=10.35.23.15 |
| bbc7408f-e8ba-45cb-b294-e857c53e9f7f | node3  | ACTIVE | private=10.35.23.55 |
+--------------------------------------+--------+--------+---------------------+

real    0m0.929s
user    0m0.309s
sys 0m0.118s

Thanks,

Sunday, September 29, 2013

Try CoreOS and Docker at FutureGrid

Docker and CoreOS are interesting new pieces of software, and you can try them on FutureGrid.

Before you try this, you need a FutureGrid account, of course, and should know how to use OpenStack at FutureGrid. Here's the link --> http://manual.futuregrid.org/openstackgrizzly.html

First, log in to sierra and set up your OpenStack environment.
ssh username@sierra.futuregrid.org
module load novaclient
source ~/.futuregrid/novarc

Boot an instance with the CoreOS image.
nova boot coreos1 --flavor m1.small --image futuregrid/coreos --key_name keyname

Check the status of the instance with "nova list", and once it shows "ACTIVE", log in to the instance.
ssh -i /path/to/your/private-key core@10.35.23.119
Warning: Permanently added '10.35.23.119' (RSA) to the list of known hosts.
   ______                ____  _____
  / ____/___  ________  / __ \/ ___/
 / /   / __ \/ ___/ _ \/ / / /\__ \
/ /___/ /_/ / /  /  __/ /_/ /___/ /
\____/\____/_/   \___/\____//____/
core@coreos1 ~ $

Execute "echo Hello World!" inside a container.
docker run base /bin/echo Hello World!
Unable to find image 'base' (tag: latest) locally
Pulling repository base
b750fe79269d: Download complete
27cf78414709: Download complete
Hello World!

What this does is: 1. download the "base" image, and 2. execute "echo Hello World!" inside a container. The download only happens the first time, so if you execute "docker run base /bin/echo Hello World!" again, you will see what I mean.
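Indeed, if you run it a second time, the cached image is used and only the output appears (shown here for illustration):
docker run base /bin/echo Hello World!
Hello World!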

Next, you can log in to a container with this.
docker run -i -t base /bin/bash

If you check the OS with "lsb_release -a", you will find that the base image is Ubuntu 12.10.
root@d7a0470c0e16:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.10
Release: 12.10
Codename: quantal
root@d7a0470c0e16:/# exit
exit
core@coreos1 ~ $ 

Next, execute a command in the background.
docker run -i -t -d base /bin/ping www.google.com
81e7918c9724

You can see the status of the container with this.
docker ps
ID                  IMAGE               COMMAND                CREATED             STATUS              PORTS
81e7918c9724        base:latest         /bin/ping www.google   41 seconds ago      Up 40 seconds

Take a look at the container's output.
docker logs 81e7918c9724
PING www.google.com (74.125.224.211) 56(84) bytes of data.
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=1 ttl=53 time=9.96 ms
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=2 ttl=53 time=9.97 ms
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=3 ttl=53 time=10.0 ms
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=4 ttl=53 time=9.95 ms
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=5 ttl=53 time=10.0 ms
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=6 ttl=53 time=9.96 ms
64 bytes from lax02s02-in-f19.1e100.net (74.125.224.211): icmp_req=7 ttl=53 time=9.95 ms

You can attach to the container with this.
docker attach 81e7918c9724
* To detach, press "Ctrl-p" followed by "Ctrl-q".

Terminate the command.
docker kill 81e7918c9724
docker ps

You might now think your container is gone, but what you did is actually stored. You can see the list of stopped containers with this.
docker ps -a -notrunc
ID                                                                 IMAGE               COMMAND                    CREATED             STATUS              PORTS
1c94ff1e5c6a180c525e7dc709e59d3f5275d38bcfbc057862eeb3ce8821e1e8   base:latest         /bin/ping www.google.com   13 minutes ago      Exit 0
64cf0b3a9f6b30bf794f312ec72596a7bb3b465dc2d93c1513228ecee8da3238   base:latest         /bin/bash                  24 minutes ago      Exit 0
a5000a2ba33503775eab2e0740b906971e29abad87d934d48efff6c6ab061fcd   base:latest         /bin/echo Hello World!     37 minutes ago      Exit 0

You can delete one with "docker rm <ID>" like this.
docker rm 1c94ff1e5c6a180c525e7dc709e59d3f5275d38bcfbc057862eeb3ce8821e1e8
docker ps -a -notrunc

Also, if you commit one of them, it will be saved as an image.
docker commit -m "My first container" 1c94ff1e5c6a180c525e7dc709e59d3f5275d38bcfbc057862eeb3ce8821e1e8 username/first_container

So now your first custom image is on the list. This means you can install software packages, add whatever else you need, commit the change, and then execute a command or run a daemon very quickly. It's all done inside the container!
docker images
REPOSITORY                TAG                 ID                  CREATED             SIZE
username/first_container   latest              40cd6f5b996e        12 seconds ago      16.39 kB (virtual 180.1 MB)
base                      latest              b750fe79269d        6 months ago        24.65 kB (virtual 180.1 MB)
base                      ubuntu-12.10        b750fe79269d        6 months ago        24.65 kB (virtual 180.1 MB)
base                      ubuntu-quantal      b750fe79269d        6 months ago        24.65 kB (virtual 180.1 MB)
base                      ubuntu-quantl       b750fe79269d        6 months ago        24.65 kB (virtual 180.1 MB)
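As a concrete (hypothetical) sketch of that workflow: install a package inside a base container, commit the result, and run the new tool from your custom image. The package and image names below are only examples:
docker run -i -t base /bin/bash
# inside the container:
apt-get update && apt-get install -y curl
exit
docker ps -a    # note the ID of the container you just exited
docker commit -m "base + curl" <container ID> username/base_curl
docker run username/base_curl curl --version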


...So, using Docker, you can build and tune your system as a container and save it like a file very quickly. (Docker stores only the differences from the base image, under "/var/lib/docker/containers/", which is why it's so quick.) Another great thing about Docker is that you can share your images on https://index.docker.io/.

One more interesting thing: CoreOS is based on Google's ChromeOS and updates itself automatically. So CoreOS is something like a ChromeOS for the cloud, though it is of course more than that. To my knowledge, the idea behind CoreOS is to let you build a massive cluster in the cloud as easily and quickly as possible.

I'll do more research about them. So please keep checking this blog. Thanks!


Tuesday, September 24, 2013

Please participate in the FutureGrid user survey

We welcome your feedback all year round, but this week we're conducting an anonymous user survey designed to help us recognize our strengths, identify areas that call for improvement, and focus our time, energy, and resources to most effectively meet the needs of the FutureGrid community.  

We have sent emails to all FutureGrid users.  Please check your inbox and perhaps your spam folder, if necessary.  The survey is brief and should only take a few minutes of your time.  We would appreciate hearing from you in the next day or two.  If you have already responded, we thank you for your thoughtful insights.

We will be sending out a reminder email to users who have not yet responded.  Hopefully you'll receive it and be able to participate.  It means a lot to us when community members take time to share about their experiences with FutureGrid.

We will be reading the survey feedback carefully and factoring it in as we plan for Year 5.  Anything requiring immediate attention from FutureGrid systems and support teams should be addressed through the help ticket system:  https://portal.futuregrid.org/help

Thursday, September 19, 2013

CentOS 6, SL 6 and Debian 7 are available on OpenStack

We are happy to announce that we have added some more useful cloud images to our OpenStack. We now have CentOS 6, Scientific Linux 6, and Debian 7, plus the interesting CoreOS as well.

[ktanaka@s1 images]$ nova image-list|grep futuregrid
| 53fab752-757e-4b2a-bce6-9f74ba76be26 | futuregrid/centos-6                   | ACTIVE |                                      |
| d5b19d33-8440-4069-815a-de9d8629dae3 | futuregrid/coreos                     | ACTIVE |                                      |
| d40facd1-7496-42b7-8bc7-70235396d349 | futuregrid/debian-7                   | ACTIVE |                                      |
| 18c437e5-d65e-418f-a739-9604cef8ab33 | futuregrid/fedora-18                  | ACTIVE |                                      |
| 1c46e959-5805-47da-a079-58900787ef25 | futuregrid/fedora-19                  | ACTIVE |                                      |
| 8f289ebb-d8fb-48f6-8429-430110eacb4a | futuregrid/sl-6 <- Scientific Linux   | ACTIVE |                                      |
| 1a5fd55e-79b9-4dd5-ae9b-ea10ef3156e9 | futuregrid/ubuntu-12.04               | ACTIVE |                                      |
| f7459a50-3ef4-40f5-a7d7-955fb3af6432 | futuregrid/ubuntu-13.10               | ACTIVE |                                      |

You can boot your instance with this:
nova boot --image <image name> --flavor m1.small --key-name <your key> <instance name>
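For example, to boot a CentOS 6 instance ("mykey" and "centos-test" are placeholders for your key pair and instance name):
nova boot --image futuregrid/centos-6 --flavor m1.small --key-name mykey centos-test
nova list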

For more information about how-to, please visit our OpenStack Grizzly Manual.

Also, if you're interested in CoreOS, please check the link.
http://coreos.com/docs/using-coreos/

Thanks,

Friday, September 13, 2013

OpenStack Tips: Download a cloud-image on India's OpenStack and upload to Sierra's OpenStack

Here's how to download an image from India's OpenStack and upload it to Sierra's OpenStack.

1. On Sierra, make sure you've completed steps 1 and 2 of our Grizzly tutorial.

2. On your local machine, download your credential from india.
scp -rp [username]@india.futuregrid.org:~/.futuregrid .futuregrid.india
3. On your local machine, upload the credential to sierra. (*Do not overwrite your ~/.futuregrid on sierra.)
scp -rp .futuregrid.india [username]@sierra.futuregrid.org:~/.futuregrid.india
4. On Sierra, load india's credential.
source ~/.futuregrid.india/openstack/novarc
module load novaclient
5. On Sierra, check your image id and download your image/ramdisk/kernel.
nova image-list|grep [name]
glance image-download [id] --file [file name to save]
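If you are not sure which kernel and ramdisk belong to an AMI-style image, the image's properties list them; a quick sketch (the ids are placeholders):
glance image-show [id] | grep -E "kernel_id|ramdisk_id"
glance image-download [kernel_id] --file kernel-file
glance image-download [ramdisk_id] --file ramdisk-file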
6. On Sierra, load sierra's novarc.
source ~/.futuregrid/novarc
7. On Sierra, upload them.
glance image-create --name=<kernel name> --disk-format=aki --container-format=aki < kernel-file
glance image-create --name=<ramdisk name> --disk-format=ari --container-format=ari < ramdisk-file
glance image-create --name=<image name> --disk-format=ami --container-format=ami --property kernel_id=[kernel_id] --property ramdisk_id=[ramdisk_id] < your-image-file.img

Thursday, July 25, 2013

Science Clouds or It Takes a Village to Give a Talk on Cloud Computing


On Wednesday, July 24, 2013, Kate Keahey, David Lifka, Manish Parashar, Warren Smith, Carol Song, and Shaowen Wang facilitated a Birds of a Feather session at XSEDE13 entitled Science Clouds or It Takes a Village to Give a Talk on Cloud Computing.

"The objective of this BOF is to bring together a community of current and prospective XSEDE cloud users and both provide and solicit information on scientific cloud use. We will describe existing cloud offerings in academic space, such as FutureGrid and Red Cloud, describe examples of successful scientific use and discuss with the participants the challenges the community faces when accessing clouds and ways of addressing them, as part of an “approach clinic”. We will then brainstorm ideas on approaches, tools and initiatives that could make the current use of cloud easier and how to evaluate the potential of infrastructure clouds in the context of specific application groups."

For a more complete description of the BOF, please view the XSEDE13 schedule.

Download the slides for the BOF here. [PPTX]

HOW TO MAKE A MOOC

FutureGrid community members are invited to create MOOC resources related to the work they are developing through their use of FutureGrid. 

For information on how to create a FutureGrid MOOC, please visit the How To Make A MOOC page:  https://portal.futuregrid.org/mooc

FutureGrid XSEDE13 Presentations highlight MOOCs


Details about how MOOCs are opening up exciting opportunities for education and training in HPC, Grid, and Cloud computing can be found in these two presentations delivered on July 24, 2013 at XSEDE13, San Diego, CA:

FutureGrid UAB Meeting [PPTX] - Gregor von Laszewski and Geoffrey Fox

Tuesday, July 23, 2013

FutureGrid Upgrades Support of Educational Use

As part of an ongoing commitment to education, FutureGrid announces prototypes of Massive Open Online Courses (MOOCs) designed for use in class or as supplemental out-of-class material for courses covering HPC, Grid, and Cloud Computing.
Currently two FutureGrid MOOCs are available.
The Intro to FutureGrid MOOC, divided into six units, covers core capabilities: setting up a portal account and project, using non-virtualized HPC nodes, and exploring two approaches to virtual machines, the OpenStack Grizzly release and the Eucalyptus system. Instructors for the Intro to FutureGrid MOOC are FutureGrid PI Geoffrey Fox and FutureGrid Software Architect Gregor von Laszewski.
In addition, FutureGrid MOOCs will cover advanced topics. We begin with FutureGrid TEOS Team Leader Renato J. Figueiredo's IP-over-P2P (IPOP) MOOC: Network Virtualization in Infrastructure-as-a-Service (IaaS) Cloud Computing. The IPOP MOOC has twelve units of instruction. Students who take this MOOC will learn about the role of virtual networks in IaaS, the key abstractions and techniques used to implement them, and the challenges and techniques involved in network virtualization across multiple IaaS clouds, and will explore the architecture and usage of the open-source IPOP virtual network through hands-on exercises.
Further MOOC components are under development and will be available in the coming months.  Comments and suggestions are welcome.  
FutureGrid community members are invited to create MOOC resources related to the work they are developing through their use of FutureGrid. For information on how to create a FutureGrid MOOC, please visit https://portal.futuregrid.org/mooc
The FutureGrid MOOCs are available at:  https://fgmoocs.appspot.com

Friday, July 19, 2013

Science Clouds BOF at XSEDE 13

FutureGrid invites you to engage in the Science Clouds Birds of a Feather at XSEDE 13.

Date: Wednesday July 24, 2013
Time:  6:00 - 7:00pm
Location:  La Mesa, Marriott Marquis & Marina, San Diego, CA

Organizers: Kate Keahey, David Lifka, Manish Parashar, and Warren Smith

The objective of this BOF is to bring together a community of current and prospective XSEDE cloud users to both provide and solicit information on scientific cloud use.

The BOF will consist of two parts. The first part will be informational; we will describe science clouds available on FutureGrid as well as Red Cloud and their capabilities, give examples of scientific and educational projects successfully using those clouds, as well as an overview of selected software tools facilitating cloud use. In the second part of the BOF we will discuss the motivation of participating XSEDE communities to move to cloud computing, expected benefits, as well as existing experiences based on the XSEDE cloud survey.

Please visit XSEDE 13 event listing for further details: http://sched.co/YcYA3J

We hope to see you there.

Wednesday, July 10, 2013

Service disruption

Update:  OK... Everything is back up and running.  Thanks for your patience as we responded to an unplanned power outage.

Update:  HPC -- India, bravo, delta and echo are back online. Cloud -- OpenStack and Eucalyptus are back online.  Xray is still being worked on.  

Earlier posts:  

Xray is down due to power failure.  Other IU resources may be unstable. FutureGrid Portal is up and running.  Check FutureGrid status page in the portal for updates as we work to restore access to the systems.  

FutureGrid resources at Indiana University - including the portal -  may not be accessible at this time.  We are investigating.  We will have things up and running again as soon as possible.  Thanks for your patience. 

Thursday, June 13, 2013

FutureGrid provided hands-on experience for participants at the Cloud Computing and Software-defined Networking tutorial at Mini-PRAGMA 2013, Indonesia



FutureGrid Co-PI Jose Fortes (University of Florida) conducted a 1-day tutorial on Cloud Computing and Software-defined Networking at Universitas Indonesia on June 3, 2013. This event was part of a workshop organized by PRAGMA and Universitas Indonesia on topics related to middleware, international collaboration and biodiversity. 

The tutorial broadly covered: 

"What, Why, When of Cloud Computing" 
This session introduced attendees to the different types of clouds (SaaS, PaaS, IaaS and others), explained the benefits of their services, illustrated successful uses of cloud systems; examples using commercial and/or public clouds were considered.
"How to Build a Cloud" 
This session introduced attendees to open-source software that can be used to manage and deliver cloud services. Examples using OpenStack were covered.
"What, Why, When of Software-Defined Networking" 
This session introduced the key concepts and basic technologies for software-defined networking and illustrated them with applications. OpenFlow and virtual networking were discussed.
"How to Use One (or More) Clouds" 
This session introduced the attendees to advanced concepts such as the Intercloud and the use of Hadoop for big-data applications.

The tutorial included demos and hands-on activities where students used FutureGrid resources to deploy concepts and software introduced during the tutorial. There were forty participants, including students, research assistants, researchers, system administrators, and faculty. They came from various backgrounds such as Mathematics, Computer Science, Physics, Electrical Engineering, and Pharmacy at Universitas Indonesia.

Monday, June 3, 2013

Service Outage --- June 4, 2013 --- Indiana University Resources.

All FutureGrid resources at Indiana University will be down on June 4, 2013 from 6:00 AM -- 5:00 PM EDT due to planned electrical work and equipment relocation in the Indiana University Data Center.  This will affect:
Email to help@futuregrid.org will be queued during the service outage.  We will respond to any email as soon as possible after services are restored.

We will update this post as work progresses.

Current Status:
  • 06/04/2013 06:00 AM: Service shutdowns will begin.
  • 06/04/2013 12:00 PM: Work progressing on schedule.
  • 06/04/2013 06:30 PM: Electrical work completed, network reconfiguration still in progress.
  • 06/04/2013 08:30 PM: Network reconfiguration still in progress.  Xray is available.
  • 06/04/2013 11:15 PM: All resources should now be available.  Please report any issues to help@futuregrid.org

Monday, February 25, 2013

Tips: Source installation on your home directory

Hi, this is my first post. I'd like to share some tips on this blog.

The first tip is source installation of software. Sometimes I want to try a new version of some software, for fun or for study, and install it in my home directory. Source installation is basically five steps.

1. Download
 wget http://www.url.com/path/to/software.tar.gz
 or
 wget http://www.url.com/path/to/software.tar.bz2
2. Uncompress
 tar zxvf software.tar.gz
 or
 tar jxvf software.tar.bz2
3. Setup
 ./configure --prefix=/path/to/install
4. Build
 make
5. Install
 make install

So here's an example: installing the pre-release OpenMPI version 1.7rc6 in my home directory for use on bravo.

First, log in to india and submit an interactive job to the bravo queue.
 ssh myaccount@india.futuregrid.org
 qsub -I -l nodes=1:ppn=8 -q bravo
"-I" = interactive mode
"-l nodes=1:ppn=8" = reserve 1 node and 8 processors per node
"-q bravo" = submit a job on bravo cluster

Download OpenMPI 1.7rc6 from the website (http://www.open-mpi.org/software/ompi/v1.7/).
 wget http://www.open-mpi.org/software/ompi/v1.7/downloads/openmpi-1.7rc6.tar.bz2

Uncompress
 tar jxvf openmpi-1.7rc6.tar.bz2

Create a directory for the software, and set up the installation.
 mkdir -p /N/u/myaccount/opt/test
 cd openmpi-1.7rc6
 ./configure --help
 ./configure --prefix=/N/u/myaccount/opt/test/openmpi-1.7rc6

Build
 make

Then, install
 make install

Installation is done.

This is optional, but I usually add the binary and library paths by putting these lines at the bottom of my .bashrc.
 # OpenMPI-1.7rc6
 export OPENMPI=/N/u/myaccount/opt/test/openmpi-1.7rc6
 export PATH=$OPENMPI/bin:$PATH
 export LD_LIBRARY_PATH=$OPENMPI/lib:$LD_LIBRARY_PATH
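A quick sanity check that the shell now picks up the new build (a minimal sketch; the paths follow the example account above):
 source ~/.bashrc
 which mpicc       # should report /N/u/myaccount/opt/test/openmpi-1.7rc6/bin/mpicc
 mpirun --version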

Now I have this new version of OpenMPI available to me. An MPI benchmark would be a good topic for the next post.

Tuesday, February 19, 2013

Announcing Nimbus Phantom Alpha on FutureGrid


We recently released Nimbus Phantom as a hosted service running on FutureGrid.

Nimbus Phantom is a hosted service, running on FutureGrid, that makes it easy to leverage on-demand resources provided by infrastructure clouds. Phantom allows the user to deploy a set of virtual machines over multiple private, community, and commercial clouds and then automatically grows or shrinks this set based on policies defined by the user. This elastic set of virtual machines can then be used to implement scalable and highly available services. An example of such service is a caching service that stands up more workers on more resources as the number of requests to the service increases. Another example is a scheduler that grows its set of resources as demand grows.

Currently Phantom works with all FutureGrid Nimbus and OpenStack clouds as well as Amazon and the XSEDE wispy cloud (the only XSEDE cloud for now). A user can access it via two types of clients: an easy-to-use web application and a scripting client. The scripting client is the boto autoscale client, as Phantom currently implements the Amazon Autoscaling API – so you can think of it as Amazon Autoscale for FutureGrid clouds that also allows cloudbursting to XSEDE and commercial clouds and is easy to extend with your own policies and sensors.

The Nimbus Phantom web interface in action
The simplest scenario for using Phantom is as a gateway for deploying and monitoring groups of virtual machines spread over multiple FutureGrid clouds. In a more complex scenario you can use it to cloudburst from FutureGrid clouds to Amazon. Finally, you can use it to explore policies that will automate cloudbursting and VM allocations between multiple clouds.

For more information and/or to use the service, go to www.nimbusproject.org/phantom. It should take no more than 10-15 minutes to start your own VMs. Happy scaling, and we would appreciate your feedback!

Monday, February 4, 2013

CM - Cloud mesh, a simple tool to manage multiple virtual machines


Although euca2ools and nova clients provide a simple interface to IaaS frameworks, they do not provide the convenience I needed to start, manage, and experiment with multiple parallel virtual machines.

Hence I wrote a little tool called "cloud mesh", or cm for short, for FutureGrid. A small video about some of its functionality is shown here:

The code and installation instructions can be found here:

FutureGrid and ConPaaS

I am visiting Vrije Universiteit in Amsterdam for a sabbatical, and a major collaboration has been with the ConPaaS team led by Thilo Kielmann and Guillaume Pierre (who's now at Rennes). One of the main activities in our collaboration has been the integration of technologies that are part of the FutureGrid software stack - the IPOP virtual network and the Grid appliance "bag-of-tasks" middleware - with the ConPaaS Platform-as-a-Service system. This has been a very interesting activity, as I get  a chance to learn more about various PaaS technologies, and witness the possible benefits that IPOP/Grid appliance technologies bring to other projects.

First, a bit of background. IPOP is an "IP-over-P2P" overlay that self-organizes virtual private networks across the Internet; the Grid appliance is a virtual machine appliance that encapsulates IPOP and high-throughput computing middleware (HTCondor) to enable easy-to-deploy self-configuring virtual clusters for "bag-of-tasks" applications. FutureGrid users have access to run Grid appliances across its infrastructure for research and educational projects (check out our tutorials for details on how to run the Grid appliance on FutureGrid).  ConPaaS is part of the EU Contrail project; it provides a runtime environment to easily run applications on the cloud by deploying full-fledged platforms as a service. Its applications include Web hosting, task farming, and Map/Reduce distributed computing.

What IPOP can bring to ConPaaS is the ability to create virtual private clouds that span across multiple providers, as well as handling network address translation (NAT) and firewall devices. Essentially, IPOP-enabled ConPaaS services can communicate over a VPN that is decoupled from the public Internet. The concepts behind the Grid appliance are applicable to one of the services that ConPaaS offers - task farming. By encapsulating the IPOP virtual networks in virtual machine images, a task-farming service can transparently aggregate resources across multiple cloud providers into a single virtual resource pool that is scheduled by HTC middleware - for independent tasks, as well as for workflows.

So, there is a nice synergy between these systems, but getting complex software systems to work together is where the rubber meets the road. As a first concrete step in the integration, we had a productive hands-on meeting this month - where we installed and tested IPOP in the ConPaaS image, shook off a couple of bugs/configuration issues, and worked out a path towards integrating IPOP as a virtual network service that is optionally enabled by a ConPaaS user. The initial tests went well, and we were able to connect ConPaaS VMs running across Amazon EC2 and the DAS cluster at Vrije.

We hope to have integration ready by the next release of ConPaaS. In the meantime, the ConPaaS team shows that we "eat our own dog food" to celebrate the release of version 1.1 - now the project's web site www.conpaas.eu can rely on its own ConPaaS technologies to host it. Cheers!


Monday, January 28, 2013

Welcome to the FutureGrid Testbed blog!

This FutureGrid Testbed blog provides opportunities for FutureGrid team members to share information about systems and software innovations, research and educational projects, publications, presentations, and other activities related to FutureGrid, an experimental testbed that allows researchers and educators to collaborate on the development and testing of innovative approaches to parallel, grid, and cloud computing.

Want to use FutureGrid?  It's easy to get started. We welcome new proposals, and project approval is fast.  Get started using FutureGrid today.

Active FutureGrid Users:  We'd love to feature your FutureGrid project work and address topics that will facilitate your research and educational use of FutureGrid.  Please let us know what you'd like to see us share in this new blog by leaving a comment here or submitting a ticket through the FutureGrid Portal.




FutureGrid is funded by the National Science Foundation (NSF) and is led by Indiana University with University of Chicago, University of Florida, San Diego Supercomputer Center, Texas Advanced Computing Center, University of Virginia, University of Tennessee, University of Southern California, Dresden, Purdue University, and Grid 5000 as partner sites.  NSF grant no. 0910812.  FutureGrid is a resource provider for XSEDE.