Back in July, Microsoft announced it was making .NET available under its Community Promise, which in theory allowed free software developers to use the technology without fear of patent lawsuits. Not surprisingly, many free software geeks were unconvinced by the promise (after all, what's a promise compared to an actual open licence?), but now Microsoft has taken things to the next level by releasing the .NET Micro Framework under the Apache 2.0 licence. Yes, you read that correctly: a sizeable chunk of .NET is about to go open source.
More than 70 percent of Malaysian government offices are running open source software, according to figures released by the country's Open Source Competency Centre.
The centre was established as part of the 2004 Malaysian Public Sector OSS Master Plan, to guide and co-ordinate the implementation of OSS in the public sector.
The latest OSS adoption figures, released on 24 July, show that 521 of the country's 724 public sector agencies (72 percent) have adopted OSS. This is a significant increase from 354 agencies (49 percent) in 2008 and 163 (22.5 percent) in 2007.
Malaysia is certainly raising the bar in terms of open source adoption and leadership!
NOTE: I do not take any responsibility for any damage to your disk or data caused by trying my method or any of the commands stated in this article. YOU HAVE BEEN WARNED!!!
- Let's say our corrupted filesystem is the partition /dev/sdb3, of ext3 type. We will mount the partition under /mnt/sdb3, so create that directory if you don't have it already.
Also, create the following directory structure to keep backup data.
Note that the ext3 filesystem is the same as ext2, with the only addition being the journal. So our entire technique will treat the filesystem as ext2, even though our corrupted filesystem is of ext3 type, because our aim is to recover data, not the journal (which is unrecoverable as far as I know). So be careful while you issue any of my commands: unless explicitly told otherwise, don't specify the ext3 filesystem type in any of them. Use all my commands exactly as written below.
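As a concrete sketch of the setup described above, the demo below works only on a throwaway image file, so no real disk is touched; the device name /dev/sdb3 and the backup path in the comments are the placeholders used in this article, and you would substitute your own.

```shell
# Demonstration on a throwaway image file -- no real disk is touched.
dd if=/dev/zero of=/tmp/demo-ext2.img bs=1M count=8 2>/dev/null

# Make a plain ext2 filesystem (no journal), mirroring the advice above
# to ignore the ext3 journal entirely during recovery.
mke2fs -F -q /tmp/demo-ext2.img

# Force a full consistency check, answering yes to any repairs.
e2fsck -f -y /tmp/demo-ext2.img

# On a real, damaged /dev/sdb3 the corresponding steps would be:
#   mkdir -p /mnt/sdb3 /root/backup/sdb3
#   e2fsck -f -y /dev/sdb3
#   mount -t ext2 -o ro /dev/sdb3 /mnt/sdb3   # note: ext2, read-only
#   cp -a /mnt/sdb3/. /root/backup/sdb3/
```

Mounting read-only with `-t ext2` forces the kernel to skip journal replay, which is exactly what you want when the journal itself may be damaged.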
With the cloud computing wave poised to reach the world market in the next 12 to 18 months, open source software and coding techniques are about to hit the big time.
That’s because open source software and the methodologies that accompany it have already been proven to be the chosen route for the vast majority of companies aiming to capitalise on the cloud phenomenon.
For evidence of this, you need look no further than the route taken by companies such as Amazon, Google and Rackspace in building out the massive datacentres whose capacity they plan to begin selling in the coming years.
An attack discussed on Slashdot just a few days back has come true. A flaw that some researchers had claimed was too theoretical to worry about has now been demonstrated by a working exploit. The attack description is available on securegoose.org.
The exploit by Anil Kurmus is significant because it successfully targeted the so-called SSL renegotiation bug to steal Twitter login credentials that passed through encrypted data streams. When the flaw surfaced last week, many researchers dismissed it as an esoteric curiosity with little practical effect.
For one thing, the critics said, the protocol bug was hard to exploit. And for another, they said, even when it could be targeted, it achieved extremely limited results. The skepticism was understandable: While attackers could inject a small amount of text at the beginning of an authenticated SSL session, they were unable to read encrypted data that flowed between the two parties.
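The force of "a small amount of text at the beginning" is easiest to see with plain HTTP. A simplified, hypothetical sketch of the prefix-injection idea (the paths and header names here are illustrative, not the actual exploit traffic): an unfinished header line at the end of the attacker's prefix swallows the victim's request line, so the server acts on the attacker's request using the victim's credentials.

```python
# The victim's genuine request, carrying their secret cookie.
victim = (
    "GET /account HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Cookie: session=SECRET\r\n\r\n"
)

# Plaintext the attacker injected before the renegotiation completed.
# The trailing unfinished header ("X-Ignore: ") absorbs the victim's
# request line, turning it into a harmless header value.
prefix = (
    "POST /update_status HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "X-Ignore: "
)

merged = prefix + victim
print(merged.splitlines()[0])  # server sees: POST /update_status HTTP/1.1
```

The attacker never decrypts anything; the server simply processes the concatenated stream, which is why the flaw yields real damage despite the attacker being unable to read the encrypted data.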
A lot of people seem to think that open source is a magic solution to project management, and that open source projects will automatically attract a large and healthy community of contributors and users who will improve the software. This, of course, is not the case. In fact, creating a successful open source project is a major and difficult effort. You have to deliver an initial release that people find interesting, attract other people, then facilitate and lead the community, and so on. You just have to look at all the failed projects on SourceForge.
If you're wondering what the folks over at KDE have been cooking up for the next major release, KDE 4.4, well, quite a bit as it turns out. In a lengthy interview, KDE core developer and spokesperson for the project Sebastian Kugler details the myriad changes that are coming with the 4.4 release — the fifth major release since KDE 4.0 debuted to much criticism nearly two years ago. The project has closed about 18,000 bugs over the past six months and the pace of development is snowballing. The 'heavy-lifting' in libraries and frameworks for 4.0 is now starting to pay off. Perhaps the biggest change is in the development of a semantic desktop. According to Kugler, 'If you tag an image in your image viewer, the tag becomes visible in your desktop search. That's how it should be, right?' There is also a picture gallery of KDE 4.4 (svn) screenshots so you can see what it will look like.
As amazing as today's supercomputing systems are, they remain primitive, and current designs soak up too much power, space and money. And as big as they are today, supercomputers aren't big enough — a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be the next performance goal: an exascale system. Today, supercomputers are well short of an exascale. The world's fastest system, at Oak Ridge National Laboratory, according to the just-released Top500 list, is a Cray XT5 with 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD). The Jaguar is capable of a peak performance of 2.3 petaflops.
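The quoted peak figure is easy to sanity-check with back-of-envelope arithmetic. A quick sketch, assuming a 2.6 GHz clock and 4 double-precision flops per core per cycle for those Opterons (neither number appears in the article, so treat them as illustrative assumptions):

```python
# Back-of-envelope check of Jaguar's peak flops figure.
# Assumptions (not stated in the article): 2.6 GHz clock,
# 4 double-precision flops per core per cycle.
cores = 224_256
clock_hz = 2.6e9
flops_per_cycle = 4

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e15:.2f} petaflops")  # ~2.33, close to the quoted 2.3
```

An exascale machine would need roughly 400 times this peak, which gives a sense of why power and space dominate the SC09 discussion.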