Traditional security systems may prove ineffective, and eventually obsolete, in warding off Web attacks launched from other countries, according to Val Smith, founder of Attack Research. New attack trends include blog spam and SQL injections originating in Russia and China, Smith said during his talk at the Source Boston Security Showcase on Friday.
"Client-side attacks are where the paradigm is going," Smith said. "Monolithic security systems no longer work."
Hackers use Web browsers as exploitation tools to spread malware and collect sensitive information. Smith used examples from clients of his company, which analyzes and researches computer attacks, to demonstrate the threat posed by blog spam and SQL attacks.
Attackers targeted high-traffic sites with blog spam and posted comments on blogs, he said. The comments looked odd and tended to have non-English phrases placed in large blocks of text with random words hyperlinked, he said. Clicking on such links took users to sites that seemed like blogs but were pages loaded with malware, Smith said.
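A site operator could flag comments matching the pattern Smith describes (many hyperlinked words scattered through large blocks of non-English text) with a simple heuristic. The sketch below is illustrative only; the thresholds and sample comments are invented, not tuned production values:

```python
import re

def looks_like_blog_spam(comment_html, max_links=3, min_nonascii_ratio=0.3):
    """Heuristic check for the spam pattern described in the article:
    blocks of non-English text with random words hyperlinked.
    Thresholds are illustrative, not tuned values."""
    links = re.findall(r'<a\s+[^>]*href=', comment_html, re.IGNORECASE)
    text = re.sub(r'<[^>]+>', '', comment_html)  # strip tags, keep visible text
    if not text:
        return bool(links)
    nonascii = sum(1 for ch in text if ord(ch) > 127)
    ratio = nonascii / len(text)
    return len(links) > max_links or ratio > min_nonascii_ratio

# Invented examples: a link-stuffed Russian-text comment vs. a normal one.
spam = '<a href="http://x.example">слово</a> ' * 5 + 'большой блок текста на русском языке'
ham = 'Great post! I hit the same issue with <a href="http://example.com">this</a>.'
```

On these samples the heuristic flags the first comment and passes the second.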
A Chinese bank owned the domains for each malware site, but the IP (Internet Protocol) addresses traced to Germany. Studying the links revealed that each one contained words in Russian or Romanian, said Smith. By placing an international spin on their nefarious activities, the hackers hoped to confuse anyone investigating their work, he said.
"How are you going to track these back to the bad guys?" he said, noting that tracking is complicated by language barriers, working with foreign law organizations and dealing with countries "that just may not want to talk to us."
While the goals of blog spam attacks remain unclear, Smith said financial incentives serve as motivation. Adware installed after a user visits an infected site nets a hacker money, as does clicking on an advertisement on the page. Other hackers are looking to expand their botnets, or networks of compromised machines used for malevolent purposes.
Smith's investigation traced the attacks to a home DSL account in Russia. The international nature of the incident made prosecution unlikely, he said.
The SQL injection attack Smith discussed originated in China and targeted one of his client companies, attempting to steal information about the businesses that visited the client's Web site.
Hackers first launched a SQL injection and uploaded a back door that allowed them to take control of the system.
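The initial foothold is the classic injection pattern: attacker-controlled input spliced directly into a SQL string. A minimal sketch using Python's built-in sqlite3 module (illustrative only, not the victim's actual code) shows the flaw and the standard parameterized-query fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: attacker input is spliced into the query string,
# so the payload rewrites the WHERE clause to match every row.
payload = "x' OR '1'='1"
vulnerable = f"SELECT * FROM users WHERE name = '{payload}'"
rows_leaked = conn.execute(vulnerable).fetchall()  # leaks all rows

# Safe: a parameterized query treats the payload as data, not SQL.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()  # matches nothing
```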
Additional SQL injections failed, so the hackers searched the system for another exploit. They found a library application that allows images to be uploaded. The hackers uploaded a GIF file with a line of code embedded in the image; the system read the GIF tag, saved the photo, and automatically executed the embedded code.
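The upload bypass works because the server trusted the file's GIF header while something later executed the file's body. A defensive sketch (the helper and the script markers below are my own illustration, not the library application from the story) checks the magic bytes and rejects files that smuggle code after the header:

```python
GIF_MAGICS = (b"GIF87a", b"GIF89a")
SCRIPT_MARKERS = (b"<?php", b"<%", b"<script")  # illustrative, not exhaustive

def is_safe_gif(data: bytes) -> bool:
    """Reject uploads that are not GIFs, or that carry code after the header."""
    if not data.startswith(GIF_MAGICS):
        return False
    return not any(marker in data.lower() for marker in SCRIPT_MARKERS)

# A plain GIF vs. a GIF/script polyglot like the one in the attack.
clean = b"GIF89a" + b"\x00" * 32
polyglot = b"GIF89a" + b"<?php system($_GET['cmd']); ?>"
```

Real deployments would also re-encode the image rather than rely on marker scanning alone.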
Hackers "targeted an app that is custom-written, in-house, and launched a specific attack against that app," Smith said.
Hackers eventually placed "iFrame" HTML code on every page of the company's Web site. The iFrames redirected the victim's browser to a server that infects the computer using a tool called "MPack." This tool profiled a victim's OS and browser and launched attacks based on that information.
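Site owners can detect this kind of compromise by scanning their own pages for unexpected iframes. A minimal sketch using Python's standard html.parser (the domains below are made up):

```python
from html.parser import HTMLParser

class IframeFinder(HTMLParser):
    """Collect the src attribute of every iframe in a page."""
    def __init__(self):
        super().__init__()
        self.iframe_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "iframe":
            self.iframe_srcs.append(dict(attrs).get("src", ""))

page = ('<html><body><p>Welcome</p>'
        '<iframe src="http://evil.example/mpack" width="0" height="0">'
        '</iframe></body></html>')
finder = IframeFinder()
finder.feed(page)
# Flag any iframe that points off-site (here, anything not on our own domain).
suspicious = [s for s in finder.iframe_srcs if "ourcompany.example" not in s]
```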
The result is that victims are getting hit with multiple attacks, said Smith.
Today, SQL injection attacks are the top threat to Web security, said Ryan Barnett, director of application security at Breach Security, in a separate interview.
Last year, cybercriminals began unleashing massive Web attacks that have compromised more than 500,000 Web sites, according to the security vendor.
"They started off in January and went through essentially the whole year," said Barnett. Previously, crafting a SQL injection attack took time, but last year attackers created worm code that could automatically seek out and break into hundreds of thousands of sites very quickly.
Now, instead of stealing data from the hacked Web sites, the bad guys are increasingly turning around and planting malicious scripts that attack the site's visitors. "Now the site is becoming a malware depot," he said.
(Bob McMillan in San Francisco contributed to this report.)
Friday, March 13, 2009
Friday, February 20, 2009
Brocade: Recession Dampens Data-center Trials
The ailing economy is leading some enterprises to put off transforming their data-center networks with emerging technologies such as FCoE (Fibre Channel over Ethernet), Brocade Communications' CTO said Thursday.
IT managers are delaying transitions to converged networks that use a single protocol across both the storage and server areas of a data center, CTO Dave Stevens said in an interview after the company announced a steep increase in revenue for its first fiscal quarter, which ended Jan. 24.
It was the first quarter since Fibre Channel storage network pioneer Brocade acquired Foundry Networks, an Ethernet LAN vendor. FCoE and Converged Enhanced Ethernet (CEE) are two emerging standards designed to combine the strengths of Fibre Channel and Ethernet.
"People are pushing back on trialing converged infrastructure right now," Stevens said. That reflects a greater selectiveness in pursuing IT projects as enterprises move into a mode of buying just what they need, he said.
However, growing network traffic and collections of data, along with requirements to keep data for longer periods, are forcing enterprises to upgrade their networks, he said. In doing so, they are saving money by consolidating ports in fewer platforms, such as large Ethernet switches that can accommodate as many connections as 10 smaller boxes, Stevens said.
"The FCOE stuff and the CEE stuff seem to be pushing out a little bit, and there seems to be more emphasis on the Ethernet side and the Fibre Channel side to implement high-density switching systems in both of those environments," he said.
Brocade reported revenue of US$431.6 million for the quarter, up 8 percent from the previous quarter and 24 percent from a year earlier. That figure included about one month of revenue from Foundry, which was folded into the company in late December. It fell short of the analysts' consensus forecast compiled by Thomson Reuters, which was US$441.7 million.
The company posted a loss of US$26 million, or $0.07 per share, because of one-time items mostly associated with the Foundry deal, according to Stevens. Excluding those items, Brocade earned US$63.6 million, or $0.15 per share, beating the Thomson Reuters analysts' consensus forecast of $0.13 per share.
Brocade reported that the integration of Foundry is ahead of schedule and that "the vast majority" of Foundry employees have remained on board. Brocade has been reorganized to focus on three market segments: data-center infrastructure, campus networks, and service-provider infrastructure, Stevens said. Engineers from both companies are working together on the next generation of technology, such as FCoE gear, but the traditional Fibre Channel and Ethernet product lines will remain and be updated for the foreseeable future, he said.
The biggest challenge in integrating the businesses has been allocating engineers and funding among the Ethernet, Fibre Channel and converged-infrastructure categories, Stevens said.
For fiscal 2009, Brocade predicted IT spending would continue to be held down by economic conditions but start to pick up in the fiscal fourth quarter and the next fiscal year. It forecast annual revenue of $1.9 billion to $2 billion, up from about $1.5 billion in fiscal 2008. But the company sees revenue rising only slightly in the following fiscal year, giving a revenue range for planning purposes of $2.1 billion to $2.2 billion for fiscal 2010.
In after-hours trading late Thursday, Brocade's shares on the Nasdaq (BRCD) were down $0.10 at $3.28.
How Will the $7.2 Billion Allotted for Broadband Stimulus Be Spent?
Though a number of details are vague, many people in tech and telecom circles hope that the $7.2 billion allotment for broadband in the newly enacted federal economic stimulus package marks the beginning of a nationwide broadband strategy.
In the American Recovery and Reinvestment Act of 2009, recently enacted by Congress, many details regarding the allocation of funds for high-tech projects remain blurry. Nevertheless, the nation's tech community appears to be encouraged by the $7.2 billion provision for broadband in the near $789 billion economic stimulus package signed into law by President Barack Obama earlier this week. Many observers believe that the allocation is a clear first step toward establishing a nationwide broadband strategy.
Officially known as "Title VI--Broadband Technology Opportunities Program," the $7.2 billion in broadband stimulus money accounts for less than 1 percent (and only five pages) of the entire package. Its purpose is to spur broadband growth in underserved areas of the country.
What the Law Says
The bureaucracy to allocate the money has not been set up yet, and no one can be absolutely sure exactly how the broadband program will work. Still, some definite elements have emerged.
First, two entities will issue grants under Title VI: the National Telecommunications & Information Administration (NTIA), and the United States Department of Agriculture (USDA) Rural Utilities Service. Tech companies, telecommunications service providers, and other ISPs large and small will compete for the grant money through a bidding process managed by the two organizations.
But confusion exists even on this point. "There's no clear way to know which government entity they should apply to," says Derek Turner, research director of Free Press, a Washington media-reform think tank.
Urban vs. Rural Broadband
The debate has begun in earnest over how much of the money should go to developing and extending rural broadband service and how much to improving quality and choice in existing urban broadband service. The division of the $7.2 billion between the two agencies provides some clue: The NTIA will be responsible for about $4.7 billion of the money, while USDA will dispense about $2.5 billion of it.
Language in the new law explicitly addresses expanding broadband coverage: "The purposes of the program are to (1) provide access to broadband service to consumers residing in unserved areas of the United States; (2) provide improved access to broadband service to consumers residing in underserved areas of the United States."
The law does not define any of those terms, however, nor does it identify the mechanism for issuing funds. Rather, it simply states that "the grant program [will be created] as expeditiously as practicable" and that "if approved, provide the greatest broadband speed possible to the greatest population of users in the area."
The USDA has been operating a Rural Utilities Service since 2002 to help small towns obtain broadband access; but the program, operating with a much smaller budget than the one it will administer under the stimulus act, has achieved only limited success.
We also know something about the timing of the allocations. The new bill states that "all awards are [to be] made before the end of fiscal year 2010."
Many Unknowns in Allocation Plan
While the Obama Administration would like to dole out this money as quickly as possible, many industry experts say that several months--and perhaps a year or more--will pass before any tangible services are up and running. Furthermore, many of the program's details have yet to be determined.
According to Bart Forbes, spokesperson for the National Telecommunications & Information Administration (NTIA), the White House's technology policy arm and one of the main distributors of the new infusion of broadband money, no bureaucratic process is in place yet to move the funds to their needed destinations. "There's no procedure; there's no staff; there's no program," Forbes says. "The key players have not been put into place."
Forbes adds that the NTIA has no permanent head at the moment--and hasn't had one since November 2007. Moreover, the Department of Commerce, of which the NTIA is a component agency, has no secretary either.
Despite these ambiguities, many industry analysts seem hopeful about the broadband initiative's prospects for success. "There's lots of potential for waste, fraud, and abuse [in the new law], but our country is in trouble right now," Turner says. "I'm cautiously optimistic."
How Will It Work?
Once the NTIA and the USDA create a system for distributing stimulus grants, they will work with the various states to outline the states' needs. The resulting proposals could come in the form of wired or wireless projects--the language of the law doesn't specify any particular speed or technology.
Meanwhile, tech companies, nonprofits, and ISPs will submit grant proposals and the Washington, D.C., entities will broker the final arrangements for funding approved proposals.
Each grant must adhere to principles of openness, including generally recognized provisions of Net neutrality, which require an "open access basis."
To counter potential fraud and waste, the law also mandates a "fully searchable database, accessible on the Internet at no cost to the public, that contains at least a list of each entity that has applied for a grant under this section, a description of each application, the status of each such application, the name of each entity receiving funds made available pursuant to this section, the purpose for which such entity is receiving such funds, each quarterly report submitted by the entity pursuant to this section, and such other information sufficient to allow the public to understand and monitor grants awarded under the program."
Will It Create Jobs?
Industry watchers say that the new law is crucial if some 20 million Americans are to obtain the broadband Internet access they need.
Craig Settles, president of Successful.com and a longtime telecom industry observer, notes that public discussion of the broadband provision and of the larger stimulus package tends to focus on their similarity to New Deal-era public spending on infrastructure projects; but he says that the parallel is inexact.
"Broadband is as vital as roads and highways, but it isn't as much in the building of the infrastructure as in the job creation that comes out of the more physical, like dams and roads and so forth--those old-school infrastructure projects generate a lot of work," Settles says. "Where you're going to have the greatest impact [with the new projects] is after the network is done. It will draw new businesses to the communities; it will enable the businesses that are there to expand their markets."
What's Next?
In coming weeks, the person appointed as Secretary of Commerce by President Obama will appoint an assistant secretary--and that person will bear primary responsibility for overseeing execution of the provisions of Title VI.
"Over the next 60 days, the Department of Commerce and Department of Agriculture are going to write the [Request for Proposal] that puts the teeth into this bill, and the stipulations that the money gets appropriated to where it's needed and that it's open so it's not just the incumbents that are sucking up the money," Settles says.
Many other industry observers--including Harold Feld, a telecommunications consultant--say that the Obama Administration's attention to broadband indicates its commitment to making technology policy a high priority.
"So far, the Obama people who are going to be running this have shown that they have a drive and an appreciation for what broadband can do to transform people's lives," Feld says. "[Obama] has made a relatively minor part of the stimulus bill something that he talks about in every one of his speeches."
Conficker Worm Gets an Evil Twin
The criminals behind the widespread Conficker worm have released a new version of the malware that could signal a major shift in the way the worm operates.
The new variant, dubbed Conficker B++, was spotted three days ago by SRI International researchers, who published details of the new code on Thursday. To the untrained eye, the new variant looks almost identical to the previous version of the worm, Conficker B. But the B++ variant uses new techniques to download software, giving its creators more flexibility in what they can do with infected machines.
Conficker-infected machines could be used for nasty stuff -- sending spam, logging keystrokes, or launching denial of service (DoS) attacks, but an ad hoc group calling itself the Conficker Cabal has largely prevented this from happening. They've kept Conficker under control by cracking the algorithm the software uses to find one of thousands of rendezvous points on the Internet where it can look for new code. These rendezvous points use unique domain names, such as pwulrrog.org, that the Conficker Cabal has worked hard to register and keep out of the hands of the criminals.
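That algorithm is what is now called a domain generation algorithm (DGA): infected machines deterministically derive the day's rendezvous domains, so defenders who reverse-engineer the algorithm can pre-register the same names before the criminals do. The toy version below illustrates the idea only; it is not Conficker's actual algorithm, and the hashing scheme and TLD list are invented:

```python
import hashlib
from datetime import date

def toy_dga(day: date, count: int = 5, tlds=(".com", ".net", ".org")):
    """Derive pseudo-random domain names from the date. Because the output
    depends only on the date, both the bot and anyone who has cracked the
    algorithm compute the same rendezvous list."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map hex digits to letters to build a plausible-looking hostname.
        name = "".join(chr(ord('a') + int(c, 16) % 26) for c in digest[:10])
        domains.append(name + tlds[i % len(tlds)])
    return domains
```

Running the function twice for the same date yields the same list, while a different date yields a different one, which is exactly the property the Cabal exploited.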
The new B++ variant uses the same algorithm to look for rendezvous points, but it also gives the creators two new techniques that skip them altogether. That means that the Cabal's most successful technique could be bypassed.
Conficker underwent a major rewrite in December, when the B variant was released. But this latest B++ version includes more subtle changes, according to Phil Porras, a program director with SRI. "This is a more surgical set of changes that they've made," he said.
To put things in perspective: There were 297 subroutines in Conficker B; 39 new routines were added in B++ and three existing subroutines were modified, SRI wrote in a report on the new variant. B++ suggests "the malware authors may be seeking new ways to obviate the need for Internet rendezvous points altogether," the report states.
Porras could not say how long Conficker B++ has been in circulation, but it first appeared on Feb. 6, according to a researcher using the pseudonym Jart Armin, who works on the Hostexploit.com Web site, which has tracked Conficker.
Though he does not know whether B++ was created in response to the Cabal's work, "it does make the botnet more robust and it does mitigate some of the Cabal's work," Support Intelligence CEO Rick Wesson said in an e-mail interview.
Also known as Downadup, Conficker spreads using a variety of techniques. It exploits a dangerous Windows bug to attack computers on a local area network, and it can also spread via USB devices such as cameras or storage devices. All variants of Conficker have now infected about 10.5 million computers, according to SRI.
Scientists Claim Big Leap in Nanoscale Storage
Nanotechnology researchers say they have achieved a breakthrough that could fit the contents of 250 DVDs on a coin-sized surface and might also have implications for displays and solar cells.
The scientists, from the University of California at Berkeley and the University of Massachusetts Amherst, discovered a way to make certain kinds of molecules line up in perfect arrays over relatively large areas. The results of their work will appear Friday in the journal Science, according to a UC Berkeley press release. One of the researchers said the technology might be commercialized in less than 10 years, if industry is motivated.
More densely packed molecules could mean more data packed into a given space, higher-definition screens and more efficient photovoltaic cells, according to scientists Thomas Russell and Ting Xu. This could transform the microelectronics and storage industries, they said. Russell is director of the Materials Research Science and Engineering Center at Amherst and a visiting professor at Berkeley, and Xu is a Berkeley assistant professor in Chemistry and Materials Sciences and Engineering.
Russell and Xu discovered a new way to create block copolymers, or chemically dissimilar polymer chains that join together by themselves. Polymer chains can join up in a precise pattern equidistant from each other, but research over the past 10 years has found that the patterns break up as scientists try to make the pattern cover a larger area.
Russell and Xu used commercially available, man-made sapphire crystals to guide the polymer chains into precise patterns. Heating the crystals to between 1,300 and 1,500 degrees Celsius (2,372 to 2,732 degrees Fahrenheit) creates a pattern of sawtooth ridges that they used to guide the assembly of the block copolymers. With this technique, the only limit to the size of an array of block copolymers is the size of the sapphire, Xu said.
Once a sapphire is heated up and the pattern is created, the template could be reused. Both the crystals and the polymer chains could be obtained commercially, Xu said.
"Every ingredient we use here is nothing special," Xu said.
The scientists said they achieved a storage density of 10Tb (1.25TB) per square inch, which is 15 times the density of past solutions, with no defects. With this density, the data stored on 250 DVDs could fit on a surface the size of a U.S. quarter, which is 24.26 millimeters in diameter, the researchers said. It might also be possible to achieve a high-definition picture with 3-nanometer pixels, potentially as large as a stadium JumboTron, Xu said. Another possibility is denser photovoltaic cells that capture the sun's energy more efficiently.
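As a rough sanity check, the density claim can be worked through arithmetically. The figures below assume a 24.26 mm U.S. quarter and a 4.7GB single-layer DVD; the result lands in the same ballpark as the 250-DVD figure, with the exact count depending on those assumptions.

```python
import math

# Assumptions: US quarter diameter 24.26 mm; single-layer DVD holds 4.7 GB.
DENSITY_TBIT_PER_SQIN = 10      # reported: 10 terabits per square inch
QUARTER_DIAMETER_MM = 24.26
DVD_GB = 4.7

radius_in = (QUARTER_DIAMETER_MM / 2) / 25.4        # mm -> inches
area_sqin = math.pi * radius_in ** 2                # quarter's face area
capacity_gb = DENSITY_TBIT_PER_SQIN * area_sqin * 1000 / 8  # Tbit -> GB
dvd_equivalent = capacity_gb / DVD_GB

print(f"{capacity_gb:.0f} GB on a quarter, roughly {dvd_equivalent:.0f} DVDs")
```

This yields on the order of 900GB, or a couple of hundred single-layer DVDs -- the same order of magnitude as the researchers' round figure.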
Russell and Xu's approach differs from how other researchers have been trying to increase storage density. Most have been using optical lithography, which sends light through a mask onto a photosensitive surface. That process creates a pattern to guide the copolymers into assembling.
The new technology could create chip features just 3nm across, far outstripping current microprocessor manufacturing techniques, which at their best create features about 45nm across. Photolithography is running into basic barriers to achieving greater density, and the new approach uses less environmentally harmful chemicals, Xu said. But actually applying the technique to CPUs would pose some challenges, such as the need to create random patterns on a CPU, Xu said.
Among other things, such a leap ahead in storage density could alter either the amount of content that people could carry with them or the quality of media delivered on discs, said Nathan Brookwood, principal analyst at Insight64. For example, it might allow movies to turn into holograms, he said.
"Just when we think we're so technically sophisticated in what we can do, along comes somebody with a notion like this, which has the potential to fundamentally change economics in so many different areas," Brookwood said.
Ultra-high-definition displays have less practical potential, according to IDC analyst Tom Mainelli. The image and video standards of today, including those used in HDTV, couldn't take advantage of a display with 3nm pixels, he said. And when it comes to monitors, price is king.
"You could see how there would be a value to that level of precision (in an area like medical imaging) ... but are we talking about a [US]$10,000 display?" Mainelli said.
Insight64's Brookwood said the technology, for which Berkeley and Amherst have applied for a patent, harkens back to the fundamental breakthroughs that created the IT industry.
"It's this kind of basic materials research that has created the opportunities that have made Silicon Valley and American manufacturing great," Brookwood said. "The last few years (in the U.S.), there have been fewer and fewer people working on this level of basic stuff," he said.
Monday, February 16, 2009
Google, Nvidia Bringing Android to Tegra Chips
Nvidia on Monday said it is working with Google to build support for Linux applications on smartphones with its upcoming Tegra mobile chips.
The company has allied with Google and the Open Handset Alliance to support the open-source Android software stack, which is increasingly being adopted by smartphone makers including Samsung and HTC.
Primarily known as a graphics card vendor, Nvidia said Tegra chips would bring advanced graphics capabilities to smartphones while drawing less power.
The support for the Android platform is an attempt to drive up Tegra's adoption among smartphone makers. Nvidia is displaying an Android-based phone with a Tegra chip at the GSMA Mobile World Congress being held in Barcelona from Monday to Thursday.
Tegra-based phones will combine advanced graphics, better battery life and always-on Internet access, Nvidia said in a press release. Smartphone makers can now use the Android platform to build Web 2.0 and Internet-based applications for Tegra-based smartphones, the company said.
Tegra chips put an Arm-based processor core, a GeForce graphics core and other components on a single chip. The product lineup includes the Tegra 600 running at 700MHz and Tegra 650 running at 800MHz. It also includes Tegra APX 2500 and APX 2600.
The systems-on-a-chip will start shipping in mid-2009 for handheld devices like smartphones and mobile Internet devices. Nvidia couldn't immediately name companies that may ship smartphones with the chips. However, an analyst last week speculated that Microsoft would launch a smartphone with Tegra's APX 2600 chip at MWC.
Beyond open-source support, Tegra chips also support Windows-based applications. At last year's MWC, Nvidia announced Tegra would support Windows Mobile and enable 3D user interfaces and high-definition video on smartphones.
Nvidia also wants to help bring about mobile Internet devices (MIDs) for US$100 with Tegra chips. Mobile Internet devices are handheld communication and Internet devices that fall somewhere between a sub-notebook and a smartphone.
A $99 Tegra-based MID is expected to be announced by Nvidia at MWC. The MID includes full high-definition 1080p video playback and full Wi-Fi and 3G mobile broadband connectivity capabilities. The always-on device can go "days" between battery charges, a company spokesman said.
Other than saying similar MIDs would ship in the second half of 2009, the company provided no further details about the product.
Adobe to Show off New Flash for Smartphones
At the Mobile World Congress on Monday, Adobe plans to show off progress on its Flash Player 10 for smartphones and deliver a new software development kit that should make reading documents on small screens easier.
While Adobe has demonstrated Flash Player 10 on the Android G1, at MWC it will also show it running on Nokia S60 and Windows Mobile phones. While Flash Player 10 won't display absolutely everything developed for the Web, even on high-end smartphones, it will come closer than its predecessors, said Anup Muraka, director of technology strategy and partner development in Adobe's platform business unit.
Muraka couldn't add any more details about the possibility of Flash in either form -- the full player or Flash Lite -- on iPhones, a question that many of the phone's users have wondered about. "I can reiterate what our CEO recently said, that we'll continue our development efforts. There's a fair bit of work to be done, and we're looking forward to completing that and coordinating with Apple to try to make it available," he said.
Adobe also planned to announce the release of a new Adobe Reader Mobile SDK that will replace Reader LE 2.5, the current mobile PDF reader. Licensees will use the new SDK to enable the display of PDF documents in their own readers. Reader LE 2.5 is slightly less flexible, requiring licensees to use an included reader.
The new SDK will fit text to the screen rather than display documents in their full size. "In the existing reader, you have to zoom in and pan around," Muraka said.
Sony is already using the technology in its Reader Digital Book, and e-book readers from Bookeen and iRex Technologies as well as Lexcycle, the maker of the iPhone Stanza book reader, plan to use it.
For developers, Adobe introduced new technology that will automatically detect if users buying their applications have Flash Lite, and if they don't, offer to install it. "A developer no longer has to be dependent on whether a consumer has the latest device or software," said Muraka. The distributable player is now available as a beta.
Adobe will also use Mobile World Congress to push its Open Screen Project, an industry initiative that aims to make it easier for content providers to offer a consistent experience to users across devices including TVs, computers and phones. Nokia and Adobe announced that they plan to award US$10 million to developers who build applications that are based on Adobe Flash and will run on Nokia phones plus other kinds of devices. Developers will submit concepts for their applications, and a group of companies including Adobe and Nokia will review them and decide which will receive seed money.