Brazil's new P2P grid computing.




Postby WarraWarra » Fri Jun 08, 2007 3:05

Man, this is what I was asking about in November 2006: donating spare CPU cycles to some grid computing system for compiling. Same idea as SETI@home, but for compiling / weather modelling etc.

Oh well, they have found a way. Good job, Brazil.

http://www.theregister.co.uk/2007/06/07 ... _p2p_grid/

http://www.ourgrid.org/

We might have to build one for Sabayon to do all the hard work.
WarraWarra
Sagely Hen
 
Posts: 1989
Joined: Wed Jan 03, 2007 21:01
Location: 31.324270, -113.585511

Postby totedati » Fri Jun 08, 2007 9:28

haha ... time to begin a never-ending 'sabayon compiling fest' .... :oops: :oops:

but

Code:
ping -c 5 www.ourgrid.org
PING dragao.ourgrid.org (150.165.85.73) 56(84) bytes of data.

--- dragao.ourgrid.org ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3998ms


it is not working ... maybe it is busy compiling ...

and from that presentation I do not understand ... is there a hierarchy involved, or is it a true p2p network where anybody can begin a 'gentoo compiling fest'? ... more info is needed here ... And I think that for lxnay and his team the main time consumer is not the compiling part but the never-ending tidy-up of broken portage packages ... there you need to spend a lot of time to find, study and apply the right patches for a lot of packages ... the 'compiling part' is only a bonus on top of all that needed patch scruffiness ....
linux is free, the expertise to harness it is not!
you don't make so much money selling open source software!
You make MORE money USING it, just like google!
linux registered user #352479
totedati
Technological Hen
 
Posts: 417
Joined: Thu Jan 11, 2007 0:24
Location: Sibiu, Romania

Postby WarraWarra » Fri Jun 08, 2007 20:34

Okay LOL.

My idea was to get a similar program up and running and then form a grid / cluster within the Sabayon community, so they can offload the bug fixing / compiling and all the automated stuff that takes up the most time to the grid / cluster / drones LOL.

Then they can just monitor the work as it happens and leave their own systems free for other work, or for other problems and fault finding.

Not sure I am using the correct words but you get the idea.

Maybe a remote XEN cluster / WAN cluster? Something along the lines of Folding@home, for those who have permanent online connections: donate / dedicate say 50% of CPU / one core of a dual-core CPU to this, or switch over to 90% for the cluster while away from the PC?

So when many of us are at work or away, our computers can contribute, provided it does not use up the internet connection limits some of us have.

Damn I can not think of the right words for this now.
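What's described here is close to what distcc already does for Gentoo on a LAN; a rough sketch, assuming the host names and subnet below (they are placeholders, not real machines):

```shell
# On each donor box: run the distcc daemon, accepting jobs
# from the build machine's subnet (placeholder address range)
distccd --daemon --allow 192.168.1.0/24

# On the build box: enable distcc in /etc/portage/make.conf
#   FEATURES="distcc"
#   MAKEOPTS="-j8"
# then tell distcc which volunteer hosts to use (hypothetical names)
distcc-config --set-hosts "localhost donor1.example.org donor2.example.org"

# A normal emerge then farms compile jobs out to the donors
emerge -av kde-base/kde-meta
```

A WAN version of this needs trust and bandwidth that a LAN does not, which is roughly the gap OurGrid is trying to fill.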

Postby WarraWarra » Fri Jun 08, 2007 20:36

I think they have the grid on another IP, as the website loads but ping might be blocked.

Postby totedati » Sun Jun 10, 2007 21:12

ok ... I understand what you mean .... but input is needed from the sabayon developers on whether such a thing is needed, or whether the main time consumer is in another part of distribution development. You and I can only guess what eats the most time in a sabayon version's birth ... But it is a good idea ... Let's not forget it as time passes ...

I can confess that my linux box spends almost all of its time in a very idle state, and that my ISP connection bandwidth is growing like popcorn in a microwave. To do something about it I did a full openoffice source upgrade, and now I plan a full kde upgrade to the latest kde-meta version (3.5.7 as of this date). But still a lot of machine cycles are wasted as time passes ... with an average of 7000 registered sabayon users in the sabayon forums, it looks like a good way to raise the earth's temperature by a few milli-degrees .... I want to eat bananas from my garden .... :oops: :oops: :oops:

Postby WarraWarra » Mon Jun 11, 2007 3:31

The KDE compile will be fun, enjoy LOL; you might want to start it at 6pm and by the next morning it should be finished LOL.
You could try it with --nodeps and then, once complete, do a revdep-rebuild to find missing deps; it might be a bit faster and it keeps running through the night with no stopping on errors.
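A sketch of that overnight approach (kde-meta is just the example from this thread; revdep-rebuild comes from gentoolkit):

```shell
# Evening: start the big compile, skipping dependency resolution
# so a broken dep does not abort the whole run
emerge --nodeps kde-base/kde-meta

# Morning: scan for packages with broken shared-library links
# and re-emerge whatever --nodeps left inconsistent
revdep-rebuild
```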

Imagine if you could log into a website, same as with your e-mail, and instruct a remote server, by selecting options, to compile a custom kernel / distro from Sabayon specific to your PC, and once finished just download the ISO and install it, all working 100% and tuned 100% for your PC's performance.
The options should also warn you if there might be a conflict or bugs, so you can compensate for them before the build starts.

Man this would be nice.

I can understand why Red Hat had to make RPM packages to shorten the install time.

I think LX was working on something like this package thingy.

I think the gentoo community on the internet will have to join forces and add a source-to-package type option to emerge, with a small metadata file inside this generic package: if you want to do a slow custom compile tuned to your machine you use the same file, otherwise you just run with the generic package.

So it would be the same size download using emerge for a generic package, but you could also customize it, or something.

Have to look into the binmerge thingy, as I have half an idea about it; I think it might be faster than normal compiles, not sure. Also I am so used to the emerge funny business that I forget to try binmerge.
LOL
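Portage's binary-package support already covers part of this idea; a minimal sketch (the package atoms are just examples):

```shell
# Build from source once, keeping a binary tarball of the result
# (it lands under /usr/portage/packages by default)
emerge --buildpkg kde-base/kde-meta

# Or snapshot an already-installed package into a binary tarball
quickpkg kde-base/kdelibs

# On another box with compatible CFLAGS: install from the binary
# package if one exists, falling back to compiling from source
emerge --usepkg kde-base/kde-meta
```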

Postby totedati » Mon Jun 11, 2007 22:13

the kde full upgrade is only a step in my test harness of how good all this portage stuff really is ... only a curiosity ... and a way to do something with all those wasted processor cycles

After a first successful openoffice v2.2.0 upgrade ... in another, older, sabayon version (and I guess gentoo also) this was a show-stopper for me ... In the kde-meta upgrade I'm very interested in whether kde versioning is really done properly, because I'm not interested in any revdep and deps options. I just do an 'emerge -av kde-base/kde-meta' and relax ... and when it is all done, if it is done, I'm curious how apps like basket or knemo will work, since it looks like they are not upgraded or recompiled 'by default' .... just curious ....

for the 'grid computing' thing, it looks promising; it could be a powerful incentive for ingenious guys like lxnay to try more patch permutations if all the stuff can be turned around in 'realtime' ... but I'm not sure how granular this thing is ... is the work distributed at the level of whole package compile units (if so, I'll try to avoid an openoffice compile monster invading my computer), or at the level of individual source-file compile units, or can it be even more granular? I do not know ... It is not so clear to me, and I still cannot find documentation beyond that web page ...

ahh ... I checked again and now 'www.ourgrid.org' is working .... maybe all the compiling for planet earth was done ...

Postby totedati » Tue Jun 26, 2007 0:38

lol ... what do I see here: http://www.nvidia.com/object/tesla_computing_solutions.html !? up to 128 GPUs, used for "supercomputing" in an average desktop computer over a PCI Express slot, TODAY !?!? more than 500 gigaflops !? very useful for a planetary p2p compiling cluster ....

looks like the processor market is in an erectile state now ....

Postby totedati » Wed Jun 27, 2007 2:00

another pig flying very high ....
http://www.networkworld.com/community/?q=node/16728

Postby WarraWarra » Sun Jul 01, 2007 7:00

Azul has a 768-core, 768 GB RAM monster for about US$50k, which is a bit more expensive than the Xeon PSC thingy I posted here.

http://www.azulsystems.com/products/com ... liance.htm

You can either get 10 quad-core 65W Xeons for US$40k, or for US$10k more the 768-core, or close to that.

* Azul Systems Compute Appliance 7280 (16 Vega 2 processors, 768 cores): 23,989 TPS
* Sun T2000 (8 cores @ 1.2 GHz): 924 TPS
* 2-socket dual core AMD Opteron 285 (4 cores @ 2.6 GHz): 1095 TPS

http://www.azulsystems.com/products/com ... hrough.htm

* Azul Systems Compute Appliance 7280 (16 Vega 2 processors, 768 cores) performance SAX1: 20,548 TPS, SAX2: 10,670 TPS, SAX3: 5,470 TPS (average SAX: 12,229 TPS)
* Sun T2000 (8 cores @ 1.2 GHz) SAX1: 466 TPS, SAX2: 228 TPS, SAX3: 117 TPS (average SAX: 270 TPS)
* 2-socket dual core AMD Opteron 285 (4 cores @ 2.6 GHz) SAX1: 690 TPS, SAX2: 272 TPS, SAX3: 145 TPS (average SAX: 369 TPS)

http://www.azulsystems.com/products/com ... ompute.htm
2x Sun M9000's
32x Sun T2000's 1.2ghz
24x Amd Opteron 2xcpu dual core servers
1x 7280 Azul 768core 384gb ram

I am impressed; for something you can have at home this is awesome.

