Landing pads being designed for extraterrestrial missions

Written by empapat on Thursday, 20 September 2012 | 07.35

ScienceDaily (Sep. 20, 2012) — When the Mars Science Laboratory's Curiosity rover landed on Aug. 6, it was another step forward in the effort to eventually send humans to the Red Planet. Using the lessons of the Apollo era and robotic missions to Mars, NASA scientists and engineers are studying the challenges and hazards involved in any extraterrestrial landing.

The technology is known as "vertical takeoff-vertical landing." According to a group working in NASA's Engineering and Technology Directorate at the Kennedy Space Center in Florida, the best approach requires that a landing pad already be in place.

"One of the greatest challenges to Apollo astronauts landing on the moon was dust, rocks and debris obscuring their vision during the final part of the descent," said Rob Mueller, a senior technologist in Kennedy's Surface Systems Office and Lunar Destination co-lead for NASA's Human Spaceflight Architecture Team. "When the Apollo lunar modules reached the 30-meter point (about 100 feet), the dust was like a fog making it difficult to see their landing site. Similarly, photographs show there were some rocks and dust kicked up by the rocket engines on the sky-crane lowering the Curiosity lander onto the Martian surface."

As the Mars Science Laboratory's descent stage used rocket engines to hover, its sky crane lowered the Curiosity rover with a 25-foot tether to a soft landing on the surface.

Mueller and others are working on ways to develop landing pads that could be robotically constructed in advance of future human expeditions to destinations such as the moon or Mars. These specially constructed landing sites could greatly reduce the potential for blowing debris and improve safety for astronauts who make the trip to Mars or another destination.

"Our best estimates indicate that descent engines of the Apollo landers were ejecting up to one-and-a-half tons of rocks and soil," said Dr. Phil Metzger, a research physicist in Kennedy's Granular Mechanics and Regolith Operations Laboratory. "It will be even more challenging when we land humans on Mars. The rocket exhaust will dig a deep hole under the lander and fluidize the soil. We don't know any way to make this safe without landing pads."

Building a landing site in advance of human arrival is part of the plan.

"Robotic landers would go to a location on Mars and excavate a site, clearing rocks, leveling and grading an area and then stabilizing the regolith to withstand impact forces of the rocket plume," Mueller said. "Another option is to excavate down to bedrock to give a firm foundation. Fabric or other geo-textile material could also be used to stabilize the soil and ensure there is a good landing site."

Metzger explained that one of the ways to ensure an on-target landing would be to have robotic rovers place homing beacons around the site.

"Tracking and homing beacons would help a spacecraft reach the specific spot where the landing pad had been constructed," he said.

Landing pad technology may be perfected on Earth well in advance of its use elsewhere in the solar system.

"Several commercial space companies are already discussing returning rocket stages to Kennedy or Cape Canaveral saving on the cost of sending payloads to low Earth orbit," Mueller said. "Rather than the first stage simply falling into the ocean, the rocket would land vertically back here at the Cape to be reused."

While landing pads will provide a smooth touchdown location, they will also require advanced technology design and decisions on how large the landing pad should be.

"One of the factors we have to consider is the atmosphere where a landing will take place," Metzger said. "The Earth has a dense atmosphere that focuses the rocket exhaust onto the ground, but also reduces how far the ejected material is dispersed. Mars, on the other hand, has an atmospheric density that is 1 percent that of Earth. It still focuses the plume into a narrow jet that digs into the soil, but it provides less drag so the ejected soil will actually travel farther.

"Then compare that to the moon with no atmosphere," he said. "The plume won't be focused so it won't dig a deep hole in the soil, but the ejected material will travel vast distances at high velocity. It is like a sandblaster on steroids. So the requirements for a landing pad are determined by the destination we're landing on."

Metzger envisions circular landing pads from about 50 to 100 meters (about 165 to 330 feet) in diameter.

"The specialized material taking the heat of the engine plume would be in the middle," he said. "The area surrounding the center would be designed to hold up support equipment."

Another issue is what substances to use in building the landing pads.

"Tests with prototype landers show that while pads are safer than touching down on natural surfaces, certain pad materials can produce debris of their own," Metzger said. "A supersonic rocket exhaust becomes extremely hot when it impacts a surface. Asphalt or concrete are out of the question because the temperature causes those materials to break apart, throwing chunks of material in all directions."

In tests of prototype landers, researchers have examined various materials on the pads from which the vehicles vertically take off and land.

"We've tested several types of materials and it seems that basalt regolith mixed with polymer binders hold up well," Metzger said.

However, the one substance for landing pads that shows the most promise is the material used on spacecraft heat shields.

"Of all the substances we studied, ablative materials seem to work best," Metzger said.

Ablative substances were used on the heat shields of spacecraft during the Mercury, Gemini and Apollo programs. The heat of re-entry was dissipated by burning off successive layers.

"While ablative materials seem to work well, the layers will eventually all burn away," Mueller said. "So next we may try reusable thermal protection material similar to that used on the space shuttle tiles or the Orion capsules."

A human expedition to Mars is still many years away, but Mueller says now is the time to start planning for how to land on another planet.

"The technology we envision will take 10 to 15 years to develop," he said. "We need to begin verifying that these concepts will work, and that's why we are already involved in the research."


Story Source:

The above story is reprinted from materials provided by NASA.


New airport system facilitates smoother take-offs and landings

Written by empapat on Wednesday, 19 September 2012 | 20.15

ScienceDaily (Sep. 19, 2012) — For airline passengers who dread bumpy rides to mountainous destinations, help may be on the way. A new turbulence avoidance system has for the first time been approved for use at a U.S. airport and can be adapted for additional airports in rugged settings across the United States and overseas.

The system, developed by the National Center for Atmospheric Research (NCAR), provides information pilots can use to route aircraft away from patches of potentially dangerous turbulence. It uses a network of wind measuring instruments and computational formulas to interpret rapidly changing atmospheric conditions.
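
The story does not spell out NCAR's formulas. Purely as an illustration of the kind of data reduction such a network performs, the sketch below grades one window of anemometer samples by how much the wind speed fluctuates; the thresholds, window and function names are hypothetical placeholders, not JAWS values.

    from statistics import pstdev

    # Hypothetical thresholds (m/s of wind-speed spread) -- placeholders only.
    # Real systems use far more sophisticated, site-specific measures.
    MODERATE, SEVERE = 2.0, 4.0

    def classify_turbulence(wind_speeds):
        """Grade one window of anemometer samples (m/s) by gustiness."""
        sigma = pstdev(wind_speeds)  # spread of the samples in the window
        if sigma >= SEVERE:
            return "severe"
        if sigma >= MODERATE:
            return "moderate"
        return "smooth"

    # One minute of samples from a single ridge-top anemometer:
    print(classify_turbulence([12.1, 14.8, 9.3, 16.0, 11.2, 15.5]))  # moderate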

The Federal Aviation Administration formally commissioned the system in July for Alaska's Juneau International Airport. NCAR researchers can now turn their attention to adapting the system to other airports that often have notoriously severe turbulence, in areas ranging from southern California and the Mountain West to Norway and New Zealand.

The Juneau system was patterned after a similar system, also designed by NCAR, that has guided aircraft for several years at Hong Kong's heavily trafficked Chek Lap Kok Airport.

"By alerting pilots to areas of moderate and severe turbulence, this system enables them to fly more frequently and safely in and out of the Juneau airport in poor weather," says Alan Yates, an NCAR program manager who helped oversee the system's development. "It allows pilots to plan better routes, helping to reduce the bumpy rides that passengers have come to associate with airports in these mountainous settings."

The system offers the potential to substantially reduce flight delays. In Alaska's capital city, where it is known as the Juneau Airport Wind System or JAWS, it enables the airport to continue operations even during times of turbulence by highlighting corridors of smooth air for safe take-offs and landings.

"The JAWS system has nearly eliminated all the risk of flying in and out of Juneau," says Ken Williams, a Boeing 737 captain and instructor pilot with Alaska Airlines. "I wish the system would be deployed in other airports where there are frequent encounters with significant turbulence, so pilots can get a true understanding of what the actual winds are doing on the surrounding mountainous terrain as you approach or depart."

The project was funded by the Federal Aviation Administration. NCAR is sponsored by the National Science Foundation.

Steep terrain, rough rides

Turbulence has long been a serious concern for pilots approaching and departing airports in steep terrain. Rugged peaks can break up air masses and cause complex and rapidly changing patterns of updrafts and downdrafts, buffeting an aircraft or even causing it to unexpectedly leave its planned flight path.

In Juneau, after several turbulence-related incidents in the early 1990s -- including one in which a jet was flipped on its side during flight and narrowly avoided an accident -- the FAA imposed strict rules of operation that effectively shut down the airport during times of atmospheric disturbance. The agency then asked NCAR to develop a system that would allow pilots to avoid regions of turbulence. Without one, Alaska's capital would often be cut off from the rest of the state, since the only way to travel in and out of Juneau is by airplane or boat.

The NCAR team used research aircraft and computer simulations to determine how different wind patterns -- such as winds that come from the north over mountains and glaciers and winds that come from the southeast over water -- correlated with specific areas of turbulence near the airport. To do this, they installed anemometers and wind profilers at key sites along the coast and on mountain ridges, using ruggedized, heated instruments that keep functioning even when exposed to extreme cold, wind and heavy icing.

The Federal Aviation Administration accepted JAWS for operational use this year. The five anemometer sites and three wind profiler sites around the airport transmit data multiple times every minute. Pilots can get near-real-time information about wind speed and direction, and a visual readout showing regions of moderate and severe turbulence in the airport's approach and departure corridors, from the FAA's Flight Service Station or online at a National Weather Service website.

"Juneau was an extremely challenging case, and we're pleased that the new system met the FAA's high standards," Yates says. "We look forward to exploring opportunities to support development of turbulence avoidance systems at additional airports. Our goal is to improve flying safety and comfort for millions of passengers."


Story Source:

The above story is reprinted from materials provided by National Center for Atmospheric Research (NCAR).


Protecting our harbors and ships with a robotic tuna fish

ScienceDaily (Sep. 19, 2012) — No question about it… they're very good at what they do. But they don't take well to orders, especially those to carry out inspection work in oily or dangerous environments, or in any kind of harsh environment, for that matter. Still, they're one of the fastest and most maneuverable creatures on the planet, having extraordinary abilities at both high and low speeds due to their streamlined bodies and a finely tuned muscular/sensory/control system.

This impressive creature is the humble tuna fish.

The Department of Homeland Security's (DHS) Science and Technology Directorate (S&T) is funding the development of an unmanned underwater vehicle designed to resemble a tuna, called the BIOSwimmer™. Why the tuna? Because the tuna has a natural body framework ideal for unmanned underwater vehicles (UUVs), solving some of the propulsion and maneuverability problems that plague conventional UUVs.

Inspired by the real tuna, BIOSwimmer™ is a UUV designed for high maneuverability in harsh environments, with a flexible aft section and appropriately placed sets of pectoral and other fins. For those cluttered and hard-to-reach underwater places where inspection is necessary, the tuna-inspired frame is an optimal design. It can inspect the interior voids of ships, such as flooded bilges and tanks, and hard-to-reach external areas such as steerage, propulsion and sea chests. It can also inspect and protect harbors and piers, perform area searches and carry out other security missions.

Boston Engineering Corporation's Advanced Systems Group (ASG) in Waltham, Massachusetts, is developing the BIOSwimmer™ for Homeland Security's Science and Technology Directorate. "It's designed to support a variety of tactical missions, and with its interchangeable sensor payloads and reconfigurable Operator Controls it can be optimized on a per-mission basis," says the Director of ASG, Mike Rufo.

BIOSwimmer™ is battery-powered and designed for long-duration operation. Like other unmanned underwater vehicles, it uses an onboard computer suite for navigation, sensor processing, and communications. Its Operator Control Unit is laptop-based and provides intuitive control and simple, mission-defined versatility for the user. A unique aspect of this system is that its internal components and external sensing are designed for the challenging environment of constricted spaces and high-viscosity fluids.

"It's all about distilling the science," says David Taylor, program manager for the BIOSwimmer™ in S&T's Borders and Maritime Security Division. "It's called 'biomimetics.' We're using nature as a basis for design and engineering a system that works exceedingly well.

"Tuna have had millions of years to develop their ability to move in the water with astounding efficiency. Hopefully we won't take that long."

Background

Biologically inspired, or biomimetic, robotics is a fairly new field that is gaining steam. There are now robotic lobsters, flies, geckos, moths, clams, dogs, and even a lamprey-like robot, all being designed to perform a variety of missions, including surveillance and search and rescue. Robots based on sinuous snakes and elephant trunks, for example, may be the ideal way to search for survivors inside the rubble of structures destroyed by explosions or natural disasters.


Story Source:

The above story is reprinted from materials provided by Homeland Security's Science & Technology Directorate, via Newswise.


NASA Mars rover targets unusual rock en route to first destination

ScienceDaily (Sep. 19, 2012) — NASA's Mars rover Curiosity has driven up to a football-size rock that will be the first for the rover's arm to examine.

Curiosity is about 8 feet (2.5 meters) from the rock. It lies about halfway from the rover's landing site, Bradbury Landing, to a location called Glenelg. In coming days, the team plans to touch the rock with a spectrometer to determine its elemental composition and use an arm-mounted camera to take close-up photographs.

Both the arm-mounted Alpha Particle X-Ray Spectrometer and the mast-mounted, laser-zapping Chemistry and Camera Instrument will be used for identifying elements in the rock. This will allow cross-checking of the two instruments.

The rock has been named "Jake Matijevic." Jacob Matijevic (mah-TEE-uh-vik) was the surface operations systems chief engineer for Mars Science Laboratory and the project's Curiosity rover. He passed away Aug. 20, at age 64. Matijevic also was a leading engineer for all of the previous NASA Mars rovers: Sojourner, Spirit and Opportunity.

Curiosity now has driven six days in a row. Daily distances range from 72 feet to 121 feet (22 meters to 37 meters).

"This robot was built to rove, and the team is really getting a good rhythm of driving day after day when that's the priority," said Mars Science Laboratory Project Manager Richard Cook of NASA's Jet Propulsion Laboratory in Pasadena, Calif.

The team plans to choose a rock in the Glenelg area for the rover's first use of its capability to analyze powder drilled from interiors of rocks. Three types of terrain intersect in the Glenelg area -- one lighter-toned and another more cratered than the terrain Curiosity currently is crossing. The light-toned area is of special interest because it retains daytime heat long into the night, suggesting an unusual composition.

"As we're getting closer to the light-toned area, we see thin, dark bands of unknown origin," said Mars Science Laboratory Project Scientist John Grotzinger of the California Institute of Technology, Pasadena. "The smaller-scale diversity is becoming more evident as we get closer, providing more potential targets for investigation."

Researchers are using Curiosity's Mast Camera (Mastcam) to find potential targets on the ground. Recent new images from the rover's camera reveal dark streaks on rocks in the Glenelg area that have increased researchers' interest in the area. In addition to taking ground images, the camera also has been busy looking upward.

On two recent days, Curiosity pointed the Mastcam at the sun and recorded images of Mars' two moons, Phobos and Deimos, passing in front of the sun from the rover's point of view. Results of these transit observations are part of a long-term study of changes in the moons' orbits. NASA's twin Mars Exploration Rovers, Spirit and Opportunity, which arrived at Mars in 2004, also have observed solar transits by Mars' moons. Opportunity is doing so again this week.

"Phobos is in an orbit very slowly getting closer to Mars, and Deimos is in an orbit very slowly getting farther from Mars," said Curiosity's science team co-investigator Mark Lemmon of Texas A&M University, College Station. "These observations help us reduce uncertainty in calculations of the changes."

In Curiosity's observations of Phobos this week, the time when the edge of the moon began overlapping the disc of the sun was predictable to within a few seconds. The remaining uncertainty exists because Mars' interior structure isn't fully understood.

Phobos causes small changes to the shape of Mars in the same way Earth's moon raises tides. The extent of that deformation depends on the Martian interior, and the tidal response in turn causes Phobos' orbit to decay. Timing the orbital change more precisely therefore provides information about Mars' interior structure.

During Curiosity's two-year prime mission, researchers will use the rover's 10 science instruments to assess whether the selected field site inside Gale Crater ever has offered environmental conditions favorable for microbial life.

For more about Curiosity, visit: http://www.nasa.gov/msl and http://mars.jpl.nasa.gov/msl. You can follow the mission on Facebook and Twitter at: http://www.facebook.com/marscuriosity and http://www.twitter.com/marscuriosity.


Story Source:

The above story is reprinted from materials provided by NASA/Jet Propulsion Laboratory.


Revolutionary ultrathin, flat lens: Smartphones as thin as a credit card?

ScienceDaily (Sep. 19, 2012) — Scientists are reporting development of a revolutionary new lens -- flat, distortion-free, so small that more than 1,500 would fit across the width of a human hair -- capable in the future of replacing lenses in applications ranging from cell phones to cameras to fiber-optic communication systems. The advance, which could lead to smart phones as thin as a credit card, appears in ACS' journal Nano Letters.

Federico Capasso and colleagues explain that the lenses used to focus light in eyeglasses, microscopes and other products use the same basic technology dating to the late 1200s, when spectacle lenses were introduced in Europe. Existing lenses are not thin or flat enough to remove distortions, such as spherical aberration, astigmatism and coma, which prevent the creation of a sharp image. Correction of those distortions requires complex solutions, such as multiple lenses that increase weight and take up space. To overcome these challenges, the scientists sought to develop a new superthin, flat lens.

Although the new lens is ultra-thin, it has a resolving power that actually approaches the theoretical limits set by the laws of optics. The lens surface is patterned with tiny metallic stripes which bend light differently as one moves away from the center, causing the beam to sharply focus without distorting the images. The current version of the lens works at a specific design wavelength, but the scientists say it can be redesigned for use with broad-band light.
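
The design principle, reconstructed here from standard optics rather than quoted from the paper, is that the stripe pattern must impose a radially varying phase delay so that every path to the focal point is optically equal. For focal length f and wavelength λ, the required profile at radius r is hyperboloidal:

$$ \varphi(r) = \frac{2\pi}{\lambda}\left(\sqrt{r^{2}+f^{2}}-f\right) $$

A conventional lens accumulates this phase gradually through curved glass; the flat lens imprints it abruptly at the surface, which is how it sidesteps the thickness-driven aberrations described above.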


Story Source:

The above story is reprinted from materials provided by American Chemical Society.


Journal Reference:

  1. Francesco Aieta, Patrice Genevet, Mikhail A. Kats, Nanfang Yu, Romain Blanchard, Zeno Gaburro, Federico Capasso. Aberration-Free Ultrathin Flat Lenses and Axicons at Telecom Wavelengths Based on Plasmonic Metasurfaces. Nano Letters, 2012; 12 (9): 4932 DOI: 10.1021/nl302516v


Thermoelectric material is the best at converting heat waste to electricity

ScienceDaily (Sep. 19, 2012) — Northwestern University scientists have developed a thermoelectric material that is the best in the world at converting waste heat to electricity. This is very good news once you realize nearly two-thirds of energy input is lost as waste heat.

The material could signify a paradigm shift. The inefficiency of current thermoelectric materials has limited their commercial use. Now, with a very environmentally stable material that is expected to convert 15 to 20 percent of waste heat to useful electricity, thermoelectrics could see more widespread adoption by industry.

Possible areas of application include the automobile industry (much of gasoline's potential energy goes out a vehicle's tailpipe), heavy manufacturing industries (such as glass and brick making, refineries, coal- and gas-fired power plants) and places where large combustion engines operate continuously (such as in large ships and tankers).

Waste heat temperatures in these areas can range from 400 to 600 degrees Celsius (750 to 1,100 degrees Fahrenheit), the sweet spot for thermoelectrics use.

The new material, based on the common semiconductor lead telluride, is the most efficient thermoelectric material known. It exhibits a thermoelectric figure of merit (so-called "ZT") of 2.2, the highest reported to date. Chemists, physicists, material scientists and mechanical engineers at Northwestern and Michigan State University collaborated to develop the material.

The study will be published Sept. 20 by the journal Nature.

"Our system is the top-performing thermoelectric system at any temperature," said Mercouri G. Kanatzidis, who led the research and is a senior author of the paper. "The material can convert heat to electricity at the highest possible efficiency. At this level, there are realistic prospects for recovering high-temperature waste heat and turning it into useful energy."

Kanatzidis is Charles E. and Emma H. Morrison Professor of Chemistry in Northwestern's Weinberg College of Arts and Sciences. He also holds a joint appointment at Argonne National Laboratory.

"People often ask, what is the energy solution?" said Vinayak P. Dravid, one of Kanatzidis' close collaborators. "But there is no unique solution -- it's going to be a distributed solution. Thermoelectrics is not the answer to all our energy problems, but it is an important part of the equation."

Dravid is the Abraham Harris Professor of Materials Science and Engineering at the McCormick School of Engineering and Applied Science and a senior author of the paper.

Other members of the team and authors of the Nature paper include Kanishka Biswas, a postdoctoral fellow in Kanatzidis' group; Jiaqing He, a postdoctoral member in Dravid's group; David N. Seidman, Walter P. Murphy Professor of Materials Science and Engineering at Northwestern; and Timothy P. Hogan, professor of electrical and computer engineering, at Michigan State University.

Even before the Northwestern record-setting material, thermoelectric materials were starting to get better and were being tested in more applications. The Mars rover Curiosity is powered by lead telluride thermoelectrics (although its system has a ZT of only 1, making it half as efficient as Northwestern's), and BMW is testing thermoelectrics in its cars by harvesting heat from the exhaust system.

"Now, having a material with a ZT greater than two, we are allowed to really think big, to think outside the box," Dravid said. "This is an intellectual breakthrough."

"Improving the ZT never stops -- the higher the ZT, the better," Kanatzidis said. "We would like to design even better materials and reach 2.5 or 3. We continue to have new ideas and are working to better understand the material we have."

The efficiency of waste-heat conversion in a thermoelectric material is governed by its figure of merit, or ZT. This number is a ratio: electrical conductivity and thermoelectric power in the numerator (which need to be high) and thermal conductivity in the denominator (which needs to be low).

"It is hard to increase one without compromising the other," Dravid said. These contradictory requirements stalled the progress towards a higher ZT for many years, where it was stagnant at a nominal value of 1.

Kanatzidis and Dravid have pushed the ZT higher and higher in recent years by introducing nanostructures in bulk thermoelectrics. In January 2011, they published a report in Nature Chemistry of a thermoelectric material with a ZT of 1.7 at 800 degrees Kelvin. This was the first example of using nanostructures (nanocrystals of rock-salt structured strontium telluride) in lead telluride to reduce electron scattering and increase the energy conversion efficiency of the material.

The performance of the new material reported now in Nature is nearly 30 percent more efficient than its predecessor. The researchers achieved this by scattering a wider spectrum of phonons, across all wavelengths, which is important in reducing thermal conductivity.

"Every time a phonon is scattered the thermal conductivity gets lower, which is what we want for increased efficiency," Kanatzidis said.

A phonon is a quantum of vibrational energy, and each has a different wavelength. When heat flows through a material, a spectrum of phonons needs to be scattered at different wavelengths (short, intermediate and long).

In this work, the researchers show that all length scales can be optimized for maximum phonon scattering with minor change in electrical conductivity. "We combined three techniques to scatter short, medium and long wavelengths all together in one material, and they all work simultaneously," Kanatzidis said. "We are the first to scatter all three at once and at the widest spectrum known. We call this a panoscopic approach that goes beyond nanostructuring."

"It's a very elegant design," Dravid said.

In particular, the researchers improved the long-wavelength scattering of phonons by controlling and tailoring the mesoscale architecture of the nanostructured thermoelectric materials. This resulted in the world record of a ZT of 2.2.

The successful approach of integrated all-length-scale scattering of phonons is applicable to all bulk thermoelectric materials, the researchers said.


Story Source:

The above story is reprinted from materials provided by Northwestern University, via EurekAlert!, a service of AAAS.


Journal Reference:

  1. Kanishka Biswas, Jiaqing He, Ivan D. Blum, Chun-I Wu, Timothy P. Hogan, David N. Seidman, Vinayak P. Dravid, Mercouri G. Kanatzidis. High-performance bulk thermoelectrics with all-scale hierarchical architectures. Nature, 2012; 489 (7416): 414 DOI: 10.1038/nature11439


Experiment corrects prediction in quantum theory

ScienceDaily (Sep. 19, 2012) — An international team of scientists is rewriting a page from the quantum physics rulebook using a University of Florida laboratory once dubbed the coldest spot in the universe.

Much of what we know about quantum mechanics is theoretical and tested via computer modeling because quantum systems, like electrons whizzing around the nucleus of an atom, are difficult to pin down for observation. One can, however, slow particles down and catch them in the quantum act by subjecting them to extremely cold temperatures. New research, published in the Sept. 20 edition of the journal Nature, describes how this freeze-frame approach was recently used to overturn an accepted rule of thumb in quantum theory.

"We are in the age of quantum mechanics," said Neil Sullivan, a UF physics professor and director of the National High Magnetic Field Laboratory High B/T Facility on the UF campus -- home of the Microkelvin lab where experiments can be conducted in near-absolute zero temperatures. "If you've had an MRI, you have made use of a quantum technology."

The magnet that powers an MRI scanner is a superconducting coil transformed into a quantum state by very cold liquid helium. Inside the coil, electric current flows friction free.

Quantum magnets and other strange, almost otherworldly occurrences in quantum mechanics could inspire the next big breakthroughs in computing, alternative energy and transportation technologies such as magnetic levitating trains, Sullivan said. But innovation cannot proceed without a proper set of guidelines to help engineers navigate the quantum road.

That's where the Microkelvin lab comes in. It is one of the few facilities in the world equipped to deliver the extremely cold temperatures needed to slow what Sullivan calls the "higgledy-piggledy" world of quantum systems at normal temperatures to a manageable pace where it can be observed and manipulated.

"Room temperature is approximately 300 kelvin," Sullivan said. "Liquid hydrogen pumped into a rocket at the Kennedy Space Center is at 20 kelvin."

Physicists need to cool things down to 1 millikelvin, one thousandth of a kelvin above absolute zero, or -459.67 degrees Fahrenheit, to bring matter into a different realm where quantum properties can be explored.

One fundamental state of quantum mechanics that scientists are keen to understand more fully is a fragile, ephemeral phase of matter called a Bose-Einstein Condensate. In this state, individual particles that make up a material begin to act as a single coherent unit. It's a tricky condition to induce in a laboratory setting, but one that researchers need to explore if technology is ever to fully exploit the properties of the quantum world.

Two theorists, Tommaso Roscilde at the University of Lyon, France, and Rong Yu from Rice University in Houston, developed the underlying ideas for the study and asked a colleague, Armando Paduan-Filho from the University of Sao Paulo in Brazil, to engineer the crystalline sample used in the experiment.

"Our measurements definitively tested an important prediction about a particular behavior in a Bose-Einstein Condensate," said Vivien Zapf, a staff scientist at the National High Magnetic Field Laboratory at Los Alamos and a driving force behind the international collaboration.

The experiment monitored the atomic spin of subatomic particles called bosons in the crystal to see when the transition to Bose-Einstein Condensate was achieved, and then further cooled the sample to document the exact point where the condensate properties decayed. They observed the anticipated phenomenon when they took the sample down to 1 millikelvin.

The crystal used in the experiment had been doped with impurities in an effort to create more of a real-world scenario, Zapf said. "It's nice to know what happens in pure samples, but the real world is messy, and we need to know what the quantum rules are in those situations."

Having performed a series of simulations in advance, they knew that the experiment would require them to generate temperatures down to 1 millikelvin.

"You have to go to the Microkelvin Laboratory at UF for that," she said. The lab is housed within the National High Magnetic Field Laboratory High B/T Facility at UF, funded by the National Science Foundation. Other laboratories can get to the extreme temperature required, but none of them can sustain it long enough to collect all of the data needed for the experiment.

"It took six months to get the readings," said Liang Yin, an assistant scientist in the UF physics department who operated the equipment in the Microkelvin lab. "Because the magnetic field we used to control the wave intensity in the sample also heats it up. You have to adjust it very slowly."

Their findings rewrote the rule for predicting the conditions under which the transition occurs between the two quantum states.

"All the world should be watching what happens as we uncover properties of systems at these extremely low temperatures," Sullivan said. "A superconducting wire is superconducting because of this Bose-Einstein Condensation concept. If we are ever to capitalize on it for quantum computing or magnetic levitation for trains, we have to thoroughly understand it."


Story Source:

The above story is reprinted from materials provided by University of Florida. The original article was written by Donna Hesterman.


Journal Reference:

  1. Rong Yu, Liang Yin, Neil S. Sullivan, J. S. Xia, Chao Huan, Armando Paduan-Filho, Nei F. Oliveira Jr, Stephan Haas, Alexander Steppke, Corneliu F. Miclea, Franziska Weickert, Roman Movshovich, Eun-Deok Mun, Brian L. Scott, Vivien S. Zapf, Tommaso Roscilde. Bose glass and Mott glass of quasiparticles in a doped quantum magnet. Nature, 2012; 489 (7416): 379 DOI: 10.1038/nature11406


Single-atom writer a landmark for quantum computing

ScienceDaily (Sep. 19, 2012) — A research team led by Australian engineers has created the first working quantum bit based on a single atom in silicon, opening the way to ultra-powerful quantum computers of the future.

In a landmark paper published September 19 in the journal Nature, the team describes how it was able to both read and write information using the spin, or magnetic orientation, of an electron bound to a single phosphorus atom embedded in a silicon chip.

"For the first time, we have demonstrated the ability to represent and manipulate data on the spin to form a quantum bit, or 'qubit', the basic unit of data for a quantum computer," says Scientia Professor Andrew Dzurak. "This really is the key advance towards realising a silicon quantum computer based on single atoms."

Dr Andrea Morello and Professor Dzurak from the UNSW School of Electrical Engineering and Telecommunications lead the team, which includes researchers from the University of Melbourne and University College London.

"This is a remarkable scientific achievement -- governing nature at its most fundamental level -- and has profound implications for quantum computing," says Dzurak.

Dr Morello says that quantum computers promise to solve complex problems that are currently impossible on even the world's largest supercomputers: "These include data-intensive problems, such as cracking modern encryption codes, searching databases, and modelling biological molecules and drugs."

The new finding follows on from a 2010 study also published in Nature, in which the same UNSW group demonstrated the ability to read the state of an electron's spin. Discovering how to write the spin state now completes the two-stage process required to operate a quantum bit.

The new result was achieved by using a microwave field to gain unprecedented control over an electron bound to a single phosphorus atom, which was implanted next to a specially-designed silicon transistor. Professor David Jamieson, of the University of Melbourne's School of Physics, led the team that precisely implanted the phosphorus atom into the silicon device.

UNSW PhD student Jarryd Pla, the lead author on the paper, says: "We have been able to isolate, measure and control an electron belonging to a single atom, all using a device that was made in a very similar way to everyday silicon computer chips."

As Dr Morello notes: "This is the quantum equivalent of typing a number on your keyboard. This has never been done before in silicon, a material that offers the advantage of being well understood scientifically and more easily adopted by industry. Our technology is fundamentally the same as is already being used in countless everyday electronic devices, and that's a trillion-dollar industry."

The team's next goal is to combine pairs of quantum bits to create a two-qubit logic gate -- the basic processing unit of a quantum computer.


Story Source:

The above story is reprinted from materials provided by University of New South Wales, via EurekAlert!, a service of AAAS.


Journal Reference:

  1. Jarryd J. Pla, Kuan Y. Tan, Juan P. Dehollain, Wee H. Lim, John J. L. Morton, David N. Jamieson, Andrew S. Dzurak, Andrea Morello. A single-atom electron spin qubit in silicon. Nature, 2012; DOI: 10.1038/nature11449


Using a laser to 'see' the smallest world: Powerful laser breathes new life into an old technology for studying atomic-level structures

ScienceDaily (Sep. 19, 2012) — A multi-university team has employed a high-powered laser based at UC Santa Barbara to dramatically improve one of the tools scientists use to study the world at the atomic level. The team used their amped-up electron paramagnetic resonance (EPR) spectrometer to study the electron spin of free radicals and nitrogen atoms trapped inside a diamond.

The improvement will pull back the veil that shrouds the molecular world, allowing scientists to study tiny molecules at a high resolution.

The team, which includes researchers from UCSB, University of Southern California (USC), and Florida State University, published its findings this week in Nature.

"We developed the world's first free-electron laser-powered EPR spectrometer," said Susumu Takahashi, assistant professor of chemistry at the USC Dornsife College of Letters, Arts and Sciences, and lead author of the Nature paper. "This ultra high-frequency, high-power EPR system gives us extremely good time resolution. For example, it enables us to film biological molecules in motion."

By using a high-powered laser, the researchers were able to significantly enhance EPR spectroscopy, which uses electromagnetic radiation and magnetic fields to excite electrons. These excited electrons emit electromagnetic radiation that reveals details about the structure of the targeted molecules.

EPR spectroscopy has existed for decades. Its limiting factor is the electromagnetic radiation source used to excite the electrons: the technique becomes more powerful at higher magnetic fields and frequencies, and when the electrons are excited with pulses of power as opposed to continuous waves.

Until now, scientists performed pulsed EPR spectroscopy with electromagnetic radiation of a few tens of gigahertz. Using UCSB's free-electron laser (FEL), which emits a pulsed beam of electromagnetic radiation, the team was able to power an EPR spectrometer at 240 GHz.
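
The link between frequency and field follows from the resonance condition: the photon energy hf must match the Zeeman splitting of the electron spins. For a free-electron g-factor of about 2, a 240 GHz source therefore implies a magnetic field near 8.6 tesla:

$$ hf = g\mu_{B}B \;\Rightarrow\; B = \frac{hf}{g\mu_{B}} \approx \frac{(6.63\times10^{-34}\ \mathrm{J\,s})(2.4\times10^{11}\ \mathrm{Hz})}{2\,(9.27\times10^{-24}\ \mathrm{J/T})} \approx 8.6\ \mathrm{T} $$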

"Each electron can be thought of as a tiny magnet that senses the magnetic fields caused by atoms in its nano-neighborhood," said Mark Sherwin, professor of physics and director of the Institute for Terahertz Science and Technology at UCSB. "With FEL-powered EPR, we have shattered the electromagnetic bottleneck that EPR has faced, enabling electrons to report on faster motions occurring over longer distances than ever before. We look forward to breakthrough science that will lay foundations for discoveries like new drugs and more efficient plastic solar cells."


Story Source:

The above story is reprinted from materials provided by University of California - Santa Barbara.


Journal Reference:

  1. S. Takahashi, L.-C. Brunel, D. T. Edwards, J. van Tol, G. Ramian, S. Han, M. S. Sherwin. Pulsed electron paramagnetic resonance spectroscopy powered by a free-electron laser. Nature, 2012; 489 (7416): 409 DOI: 10.1038/nature11437


Can nanotubes tell of bridge collapse risk?

ScienceDaily (Sep. 19, 2012) — In August 2007, the I-35W Bridge over the Mississippi River in Minneapolis collapsed, killing 13 people and injuring 145. The collapse was attributed to a design deficiency that resulted in a gusset plate failing during ongoing construction work.

Now, an interdisciplinary team of researchers at the University of Delaware is developing a novel structural health monitoring system that could avert such disasters in the future.

Erik Thostenson and Thomas Schumacher, both affiliated faculty members in the UD Center for Composite Materials, have received a three-year $300,000 grant from the National Science Foundation to investigate the use of carbon nanotube composites as a kind of "smart skin" for structures.

In preliminary research, the two found that a carbon nanotube hybrid glass-fiber composite attached to small-scale concrete beams formed a continuous conductive skin that is exceptionally sensitive to changes in strain as well as to the development and growth of damage.
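
The read-out principle is piezoresistive: strain or damage changes the electrical resistance of the percolating nanotube network. A minimal sketch of that idea follows; the gauge factor and baseline resistance are illustrative placeholders, not values from the UD study.

    # Piezoresistive read-out of a nanotube "smart skin" -- illustrative only.
    GAUGE_FACTOR = 3.0       # hypothetical (dR/R) per unit strain
    BASELINE_OHMS = 1000.0   # hypothetical resistance of the undamaged skin

    def estimate_strain(resistance_ohms):
        """Infer strain from the skin's measured resistance."""
        relative_change = (resistance_ohms - BASELINE_OHMS) / BASELINE_OHMS
        return relative_change / GAUGE_FACTOR

    # A 0.6% resistance rise reads as 0.2% strain; a large, irreversible jump
    # would instead suggest cracking that severs conductive pathways.
    print(f"{estimate_strain(1006.0):.4%}")  # -> 0.2000%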

"This sensor can either be structural, where the layer of the fiber composite adds reinforcement to a deficient or damaged structure, or nonstructural, where the layer acts merely as a sensing skin," says Schumacher, who brings to the project knowledge of structural mechanics and health monitoring of large-scale structures.

Thostenson, whose expertise lies in materials processing and characterization for sensor applications, explains that because the nanotubes are so small, they can penetrate the polymer-rich area between the fibers of individual yarn bundles as well as the spaces between the plies of a fiber composite.

"The nanotubes become completely integrated into advanced fiber composite systems, imparting new functionality without altering the microstructure of the composite," he says.

Schumacher says the approach will address a major drawback of current SHM systems, which can cover only a finite number of points.

"Selection of critical areas for monitoring remains subject to the owner's expertise," he explains. "The distributed sensing capability of the system we're developing significantly increases the chance of capturing hidden or localized micro-damage that can lead to catastrophic failure if not detected early."

Thostenson points out that a key advantage of this innovative sensor is that it can be bonded to existing structures of any shape or built into new structures during the fabrication and construction processes.

Based on their preliminary results, the researchers will now address such issues as sensor processing, characterization, and modeling as well as testing of components and complete structures.

Thostenson credits CCM with facilitating the kind of interdisciplinary approach that brought him and Schumacher together on the project. "It's truly a 50/50 collaboration that capitalizes on our complementary expertise," he says.

The two joke, though, about the specimen they tested at CCM during their exploratory work. "It was the smallest specimen I ever tested," says Schumacher, "but the largest one Erik ever tested."


Story Source:

The above story is reprinted from materials provided by University of Delaware, via Newswise.


Ultra-distant galaxy discovered amidst cosmic 'dark ages': May be oldest galaxy ever

ScienceDaily (Sep. 19, 2012) — With the combined power of NASA's Spitzer and Hubble space telescopes as well as a cosmic magnification effect, a team of astronomers led by Wei Zheng of The Johns Hopkins University has spotted what could be the most distant galaxy ever detected.

Light from the young galaxy captured by the orbiting observatories shone forth when the 13.7-billion-year-old universe was just 500 million years old.

The far-off galaxy existed within an important era when the universe began to emerge from the so-called "Dark Ages." During this period, the universe went from a dark, starless expanse to a recognizable cosmos full of galaxies. The discovery of the faint, small galaxy accordingly opens up a window into the deepest, remotest epochs of cosmic history.

"This galaxy is the most distant object we have ever observed with high confidence," said Zheng, a principal research scientist in The Henry A. Rowland Department of Physics and Astronomy at Johns Hopkins' Krieger School of Arts and Sciences and lead author of a paper appearing in Nature on Sept. 20. "Future work involving this galaxy -- as well as others like it that we hope to find -- will allow us to study the universe's earliest objects and how the Dark Ages ended."

Light from the primordial galaxy traveled approximately 13.2 billion light-years before reaching NASA's telescopes. In other words, the starlight snagged by Spitzer and Hubble left the galaxy when the universe was just 3.6 percent of its present age. Technically speaking, the galaxy has a redshift, or "z," of 9.6. The term "redshift" refers to how much an object's light has shifted into longer wavelengths as a result of the expansion of the universe. Astronomers use "redshift" to describe cosmic distances.
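
The arithmetic behind the distance claim is direct: redshift stretches every emitted wavelength by a factor of 1 + z, so at z = 9.6 the galaxy's light arrives stretched nearly elevenfold, deep into the infrared:

$$ \lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{emit}}, \qquad z = 9.6 \;\Rightarrow\; \lambda_{\mathrm{obs}} = 10.6\,\lambda_{\mathrm{emit}} $$

Hydrogen's Lyman-alpha line, for example, emitted in the ultraviolet at 121.6 nm, would be observed near 1.3 micrometers, which is why such early galaxies are hunted with infrared detectors.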

Unlike previous detections of galaxy candidates in this age range, which were only glimpsed in a single color, or waveband, this newfound galaxy has been seen in five different wavebands. As part of the Cluster Lensing and Supernova Survey with Hubble program (CLASH), the Hubble Space Telescope registered the newly described far-flung galaxy in four wavelength bands. Spitzer located it in a fifth band with its Infrared Array Camera (IRAC), placing the discovery on firmer ground.

Objects at these extreme distances are mostly beyond the detection sensitivity of today's largest telescopes. To catch sight of these early, distant galaxies, astronomers rely on "gravitational lensing." In this phenomenon -- predicted by Albert Einstein a century ago -- the gravity of foreground objects warps and magnifies the light from background objects. A massive galaxy cluster situated between our galaxy and the early galaxy magnified the latter's light, brightening the remote object some 15 times and bringing it into view.
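
On the logarithmic magnitude scale astronomers use, that 15-fold brightening corresponds to a gain of about 2.9 magnitudes:

$$ \Delta m = 2.5\,\log_{10} 15 \approx 2.9 $$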

Based on the Spitzer and Hubble observations, astronomers think the distant galaxy was spied at a time when it was less than 200 million years old. It also is small and compact, containing only about 1 percent of the Milky Way's mass. According to leading cosmological theories, the first galaxies should indeed have started out tiny. They then progressively merged, eventually accumulating into the sizable galaxies of the more modern universe.

These first galaxies likely played the dominant role in the epoch of reionization, the event that signaled the demise of the universe's Dark Ages. About 400,000 years after the Big Bang, neutral hydrogen gas formed from cooling particles. The first luminous stars and their host galaxies, however, did not emerge until a few hundred million years later. The energy released by these earliest galaxies is thought to have caused the neutral hydrogen strewn throughout the universe to ionize, or lose an electron, the state in which the gas has remained since that time.

"In essence, during the epoch of reionization, the lights came on in the universe," said paper co-author Leonidas Moustakas, a research scientist at NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, Calif.

Astronomers plan to study the rise of the first stars and galaxies and the epoch of reionization with the successor to both Spitzer and Hubble -- NASA's James Webb Space Telescope, slated for launch in 2018. The newly described distant galaxy will likely be a prime target.

Holland Ford, one of Zheng's colleagues and a co-author on the paper, commented on the findings.

"Science is very exciting when we explore the frontiers of knowledge," said Ford, a physics and astronomy professor at Johns Hopkins. "One of these frontiers is the first few hundred million years after the birth of our universe. Dr. Zheng's many years of searching for quasars and galaxies in the dawn of the universe has paid off with his discovery of a galaxy that we see as it was when the universe was less than 500 million years old.

"With his discovery, we are seeing a galaxy when it was not even a toddler," Ford said. "But this infant galaxy will in its future grow to be a galaxy like our own, hopefully hosting planetary systems with astronomers who will look back in time and see our galaxy in its infancy."


Story Source:

The above story is reprinted from materials provided by Johns Hopkins University, via Newswise. The original article was written by Lisa DeNike.


Journal Reference:

  1. Wei Zheng, Marc Postman, Adi Zitrin, John Moustakas, Xinwen Shu, Stephanie Jouvel, Ole Høst, Alberto Molino, Larry Bradley, Dan Coe, Leonidas A. Moustakas, Mauricio Carrasco, Holland Ford, Narciso Benítez, Tod R. Lauer, Stella Seitz, Rychard Bouwens, Anton Koekemoer, Elinor Medezinski, Matthias Bartelmann, Tom Broadhurst, Megan Donahue, Claudio Grillo, Leopoldo Infante, Saurabh W. Jha, Daniel D. Kelson, Ofer Lahav, Doron Lemze, Peter Melchior, Massimo Meneghetti, Julian Merten, Mario Nonino, Sara Ogaz, Piero Rosati, Keiichi Umetsu, Arjen van der Wel. A magnified young galaxy from about 500 million years after the Big Bang. Nature, 2012; 489 (7416): 406 DOI: 10.1038/nature11446


Did a 'forgotten' meteor have a deadly, icy double-punch?

ScienceDaily (Sep. 19, 2012) — When a huge meteor collided with Earth about 2.5 million years ago and fell into the southern Pacific Ocean, it not only could have generated a massive tsunami but also may have plunged the world into the Ice Ages, a new study suggests.

A team of Australian researchers says that because the Eltanin meteor -- which was up to two kilometres across -- crashed into deep water, most scientists have not adequately considered either its potential for immediate catastrophic impacts on coastlines around the Pacific rim or its capacity to destabilise the entire planet's climate system.

"This is the only known deep-ocean impact event on the planet and it's largely been forgotten because there's no obvious giant crater to investigate, as there would have been if it had hit a landmass," says Professor James Goff, lead author of a forthcoming paper in the Journal of Quaternary Science. Goff is co-director of UNSW's Australia-Pacific Tsunami Research Centre and Natural Hazards Research Laboratory.

"But consider that we're talking about something the size of a small mountain crashing at very high speed into very deep ocean, between Chile and Antarctica. Unlike a land impact, where the energy of the collision is largely absorbed locally, this would have generated an incredible splash with waves literally hundreds of metres high near the impact site.

"Some modelling suggests that the ensuing mega-tsunami could have been unimaginably large -- sweeping across vast areas of the Pacific and engulfing coastlines far inland. But it also would have ejected massive amounts of water vapour, sulphur and dust up into the stratosphere.

"The tsunami alone would have been devastating enough in the short term, but all that material shot so high into the atmosphere could have been enough to dim the sun and dramatically reduce surface temperatures. Earth was already in a gradual cooling phase, so this might have been enough to rapidly accelerate and accentuate the process and kick start the Ice Ages."

In the paper, Goff and colleagues from UNSW and the Australian Nuclear Science and Technology Organisation, note that geologists and climatologists have interpreted geological deposits in Chile, Antarctica, Australia, and elsewhere as evidence of climatic change, marking the start of the Quaternary period. An alternative interpretation is that some or all of these deposits may be the result of mega-tsunami inundation, the study suggests.

"There's no doubt the world was already cooling through the mid and late Pliocene," says co-author Professor Mike Archer. "What we're suggesting is that the Eltanin impact may have rammed this slow-moving change forward in an instant -- hurtling the world into the cycle of glaciations that characterized the next 2.5 million years and triggered our own evolution as a species.

"As a 'cene' changer -- that is, from the Pliocene to Pleistocene -- Eltanin may have been overall as significant as the meteor that took out the non-flying dinosaurs 65 million years ago. We're urging our colleagues to carefully reconsider conventional interpretations of the sediments we're flagging and consider whether these could be instead the result of a mega-tsunami triggered by a meteor."

Story Source:

The above story is reprinted from materials provided by University of New South Wales. The original article was written by Bob Beale.



Journal Reference:

  1. James Goff, Catherine Chagué-Goff, Michael Archer, Dale Dominey-Howes, Chris Turney. The Eltanin asteroid impact: possible South Pacific palaeomegatsunami footprint and potential implications for the Pliocene-Pleistocene transition. Journal of Quaternary Science, 2012; DOI: 10.1002/jqs.2571


New screening method identifies 1,200 candidate refrigerants to combat global warming

ScienceDaily (Sep. 18, 2012) — Researchers at the National Institute of Standards and Technology (NIST) have developed a new computational method for identifying candidate refrigerant fluids with low "global warming potential" (GWP) -- the tendency to trap heat in the atmosphere for many decades -- as well as other desirable performance and safety features.

The NIST effort is the most extensive systematic search for a new class of refrigerants that meet the latest concerns about climate change. The new method was used to identify about 1,200 promising, low-GWP chemicals for further study among some 56,000 that were considered. Only about 60 of these have boiling points low enough to be suitable for common refrigeration equipment, an indication of how difficult it is to identify usable fluids.

The ongoing NIST project is a response to U.S. industry interest in a new generation of alternative refrigerants that already are required for use in the European Union.

The refrigerants now used in cars and homes are mainly hydrofluorocarbons (HFCs). They were adopted a generation ago in the effort to phase out chlorofluorocarbons (CFCs), which deplete the stratospheric ozone layer. An example is R-134a (1,1,1,2-tetrafluoroethane), which replaced ozone-depleting chemicals in automobile air conditioners and home refrigerators. R-134a now is being phased out in Europe because HFCs remain in the atmosphere for many years, yielding a high GWP. A compound's GWP is defined as the warming potential of one kilogram of the gas relative to one kilogram of carbon dioxide. R-134a has a GWP of 1,430, much higher than the GWP of 150 or less now mandated for automotive use in Europe.

Promising low-GWP chemicals include fluorinated olefins, which react rapidly with atmospheric compounds and thus will not persist for long periods.

"What industry is trying to do is be prepared, because moving from a GWP in the thousands or tens of thousands to a GWP of 150 is an enormous challenge, both economically and technologically," says NIST chemist Michael Frenkel. "We decided to leverage the tools NIST has been developing for the last 15 years to look into the whole slew of available chemicals."

The affected industry is huge: The U.S. air conditioning, heating and refrigeration equipment manufacturing industry ships about $30 billion in goods annually, according to the U.S. Bureau of the Census.

NIST has extensive experience evaluating alternative refrigerants, having previously helped the refrigeration industry find replacements for CFCs.

The new NIST method estimates GWP by combining calculations of a compound's radiative efficiency (a measure of how well it absorbs infrared radiation) and atmospheric lifetime, both derived from molecular structure. Additional filtering is based on low toxicity and flammability, adequate stability, and critical temperature (where the compound's liquid and gas properties converge) in a desirable range. The method was applied to 56,203 compounds and identified 1,234 candidates for further study. The method, which was validated against available literature data, is accurate and fast enough for virtual screening applications. The approach is similar to the large-scale virtual screening and computational design methods for discovering new pharmaceuticals.
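The structure of such a screen is easy to sketch in code. Below is a minimal, illustrative filter; apart from the EU automotive GWP limit of 150 quoted above, the thresholds and the second candidate are invented for illustration, and the actual NIST method derives GWP from radiative efficiency and atmospheric lifetime estimated from molecular structure.

    # Minimal sketch of a refrigerant screening filter. Thresholds and
    # the "hypothetical fluoro-olefin" entry are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        gwp: float                 # global warming potential (CO2 = 1)
        boiling_point_c: float     # degrees Celsius
        flammable: bool
        toxic: bool

    def passes_screen(c: Candidate) -> bool:
        """Keep low-GWP fluids whose boiling point suits common equipment."""
        return (c.gwp <= 150.0                  # EU automotive limit cited above
                and c.boiling_point_c <= -20.0  # assumed cutoff for illustration
                and not c.toxic
                and not c.flammable)

    candidates = [
        Candidate("R-134a", 1430.0, -26.3, False, False),  # fails on GWP
        Candidate("hypothetical fluoro-olefin", 4.0, -29.0, False, False),
    ]
    print([c.name for c in candidates if passes_screen(c)])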

The screening is the initial stage of a larger study funded by the U.S. Department of Energy. The next step will be to further narrow down the candidates to a couple dozen suitable for detailed investigation in refrigeration cycle modeling.

Story Source:

The above story is reprinted from materials provided by National Institute of Standards and Technology (NIST).



Journal Reference:

  1. Andrei Kazakov, Mark O. McLinden, Michael Frenkel. Computational Design of New Refrigerant Fluids Based on Environmental, Safety, and Thermodynamic Characteristics. Industrial & Engineering Chemistry Research, 2012; DOI: 10.1021/ie3016126


Angling for gold: Alternative description of atomic level gold bonding

ScienceDaily (Sep. 19, 2012) — A new model provides an alternative description of atomic level gold bonding.

A study of how gold atoms bond to other atoms, using a model that takes bond direction into account, has been carried out by physicist Marie Backman from the University of Helsinki, Finland, and colleagues. These findings, which are about to be published in The European Physical Journal B, are a first step toward better understanding how gold binds to other materials through strong, so-called covalent, bonds.

What scientists need is an empirical model, based on a so-called potential, that describes the gold-gold bond in a reliable way. Most previous models only accounted for interactions in the spherical electron density around the atom. Although such a description is suitable for bonds between gold atom pairs, it is not adequate for describing how surface gold atoms bond to other materials, where the density of interacting electrons is no longer spherical.

Indeed, bond angles matter when gold binds to other materials. The authors therefore used a model based on potentials with angular dependence, referred to as a Tersoff potential. It offers a compromise between including bond directionality, which is needed for covalent bonds, and keeping the computer time needed for the simulations low.

The authors used theoretical and computational analysis to study gold atoms interacting with their neighbours. They fitted their potential functions to the most important observed characteristics of gold, such as its lattice constant, binding energy and elastic constants. With these potential functions they were then able to describe bonding in atomistic simulations. This involves, first, determining the forces on each atom from the atoms' relative positions and, second, solving the equations of motion to show how the atoms move on very short time scales.
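That two-step loop can be written down generically. In the toy sketch below, a simple Lennard-Jones pair force stands in for the paper's angular-dependent, Tersoff-style bond-order potential, and all parameter values are illustrative.

    # Toy molecular dynamics: forces from relative positions, then
    # velocity-Verlet integration of the equations of motion. A
    # Lennard-Jones pair force is a stand-in for the paper's
    # Tersoff-style potential; units and parameters are illustrative.
    import numpy as np

    def pair_forces(pos, eps=1.0, sigma=1.0):
        """Lennard-Jones forces for a small cluster (no cutoff)."""
        f = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                r = pos[i] - pos[j]
                d2 = r @ r
                s6 = (sigma ** 2 / d2) ** 3
                fij = 24.0 * eps * (2.0 * s6 ** 2 - s6) / d2 * r
                f[i] += fij
                f[j] -= fij
        return f

    def velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=1000):
        f = pair_forces(pos)
        for _ in range(steps):
            pos += vel * dt + 0.5 * f / mass * dt ** 2
            f_new = pair_forces(pos)
            vel += 0.5 * (f + f_new) / mass * dt
            f = f_new
        return pos, vel

    # Example: let a small three-atom cluster evolve.
    pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.55, 1.0, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel)

The bond-order approach of the paper differs in that the strength of each bond also depends on the angles to neighbouring bonds, which is what makes it suitable for covalent bonding at surfaces.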

Building on this model, future work could, for example, involve the development of cross potentials for gold nanoparticles and nanorods in a matrix, typically used in biomedical imaging and nanophotonics.

Story Source:

The above story is reprinted from materials provided by Springer Science+Business Media, via AlphaGalileo.



Journal Reference:

  1. M. Backman, N. Juslin, K. Nordlund. Bond order potential for gold. The European Physical Journal B, 2012; 85 (9) DOI: 10.1140/epjb/e2012-30429-y


Out of this world nanoscience: A computer chip that can assemble itself?

ScienceDaily (Sep. 19, 2012) — Imagine a computer chip that can assemble itself. According to Eric M. Furst, professor of chemical and biomolecular engineering at the University of Delaware, engineers and scientists are closer to making this and other scalable forms of nanotechnology a reality as a result of new milestones in using nanoparticles as building blocks in functional materials.

Furst and his postdoctoral researchers, James Swan and Paula Vasquez, along with colleagues at NASA, the European Space Agency, Zin Technologies and Lehigh University, reported the finding Sept. 17 in an article in the Proceedings of the National Academy of Sciences (PNAS) online edition.

The article details the research team's exploration of colloids, microscopic particles mere hundredths the diameter of a human hair, to better understand how nano-"building blocks" can be directed to "self-assemble" into specific structures.

The research team studied paramagnetic colloids while periodically applying an external magnetic field at different intervals. With just the right frequency and field strength, the team was able to watch the particles transition from a random, solid-like material into highly organized crystalline structures or lattices.
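The driving protocol is simple to sketch. The toy code below shows a square-wave ("toggled") field and a crude one-parameter relaxation model for the degree of particle ordering; none of the frequencies, amplitudes or time constants come from the study.

    # Toy sketch of a toggled external field plus a crude model of an
    # order parameter relaxing toward alignment while the field is on.
    # All rates, frequencies and amplitudes are invented.
    def field_on(t, frequency_hz=0.66, duty=0.5):
        """Square-wave toggling: True while the field is switched on."""
        period = 1.0 / frequency_hz
        return (t % period) < duty * period

    def simulate(dt=0.01, t_end=30.0, tau_on=2.0, tau_off=5.0):
        order, t, history = 0.0, 0.0, []
        while t < t_end:
            target = 1.0 if field_on(t) else 0.0
            tau = tau_on if target else tau_off
            order += (target - order) / tau * dt   # relax toward target
            history.append((t, order))
            t += dt
        return history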

According to Furst, no one had previously witnessed this guided "phase separation" of particles.

"This development is exciting because it provides insight into how researchers can build organized structures, crystals of particles, using directing fields and it may prompt new discoveries into how we can get materials to organize themselves," Furst said.

Because gravity plays a role in how the particles assemble or disassemble, the research team studied the suspensions aboard the International Space Station (ISS) through collaborative efforts with NASA scientists and astronauts. One interesting observation, Furst reported, was how the structure formed by the particles slowly coarsened, then rapidly grew and separated -- similar to the way oil and water separate when combined -- before realigning into a crystalline structure.

Already, Furst's lab has created novel nanomaterials for use in optical communications materials and thermal barrier coatings. This new detail, along with other recorded data about the process, will now enable scientists to discover other paths to manipulate and create new nanomaterials from nanoparticle building blocks.

"Now, when we have a particle that responds to an electric field, we can use these principles to guide that assembly into structures with useful properties, such as in photonics," Furst added.

The work could potentially prove important in manufacturing, where the ability to pre-program and direct the self-assembly of functional materials is highly desired.

"This is the first time we've presented the relationship between an initially disordered structure and a highly organized one and at least one of the paths between the two. We're excited because we believe the concept of directed self-assembly will enable a scalable form of nanotechnology," he said.

Story Source:

The above story is reprinted from materials provided by University of Delaware. The original article was written by Karen B. Roberts.



Journal Reference:

  1. J. W. Swan, P. A. Vasquez, P. A. Whitson, E. M. Fincke, K. Wakata, S. H. Magnus, F. D. Winne, M. R. Barratt, J. H. Agui, R. D. Green, N. R. Hall, D. Y. Bohman, C. T. Bunnell, A. P. Gast, E. M. Furst. Multi-scale kinetics of a field-directed colloidal phase transition. Proceedings of the National Academy of Sciences, 2012; DOI: 10.1073/pnas.1206915109


Fireworks in the early universe

ScienceDaily (Sep. 19, 2012) — Galaxies in the early universe grew fast by rapidly making new stars. Such prodigious star formation episodes, characterized by the intense radiation of the newborn stars, were often accompanied by fireworks in the form of energy bursts caused by accretion onto the massive central black holes of these galaxies.

This discovery by a group of astronomers led by Peter Barthel of the Kapteyn Institute of the University of Groningen in the Netherlands is published September 19 in the Astrophysical Journal Letters.

Our Milky Way galaxy forms stars at a slow, steady pace: on average one new star a year is born. Since the Milky Way contains about a hundred billion stars, the actual changes are very slight. The Milky Way is an extremely quiet galaxy; its central black hole is inactive, with only weak energy outbursts due to the occasional capture of a passing star or gas cloud.

Bright, exotic radiation

This is in marked contrast to the 'active' galaxies of which there are various types and which were abundant in the early universe. Quasars and radio galaxies are prime examples: owing to their bright, exotic radiation, these objects can be observed as far as the edge of the observable universe. The light of the normal stars in their galaxies is extremely faint at such distances, but active galaxies can be easily detected through their luminous radio, ultraviolet or X-ray radiation, which results from steady accretion onto their massive central black holes.

Peculiar exotic objects

Until recently these distant active galaxies were only interesting in their own right as peculiar exotic objects. Little was known about the composition of their galaxies, or their relationship to the normal galaxy population. However, in 2009 ESA's Herschel space telescope was launched. Herschel is considerably larger than NASA's Hubble, and operates at far-infrared wavelengths. This enables Herschel to detect heat radiation generated by the processes involved in the formation of stars and planets at a small scale, and of complete galaxies at a large scale.

Initial inspection

Peter Barthel has been involved with Herschel since 1997 and heads an observational programme targeting distant quasars and radio galaxies. His team used the Herschel cameras to observe seventy of these objects. Initial inspection of the observations has revealed that many emit bright far-infrared radiation.

The Astrophysical Journal Letter 'Extreme host galaxy growth in powerful early-epoch radio galaxies', by Peter Barthel and co-authors Martin Haas (Bochum University, GER), Christian Leipski (Max-Planck Institute for Astronomy, Heidelberg, GER) and Belinda Wilkes (Harvard-Smithsonian Center for Astrophysics, Cambridge, USA), describes their project and the detailed analysis of the first three distant radio galaxies.

Simultaneous growth

The fact that these three objects, as well as many others from the observational sample, emit strong far-infrared radiation indicates that vigorous star formation is taking place in their galaxies, creating hundreds of stars per year during one or more episodes lasting millions of years. The bright radio emission implies strong, simultaneous black hole accretion. This means that while the black holes in the centres of the galaxies are growing (as a consequence of the accretion), the host galaxies are also growing rapidly.

The Herschel observations thereby suggest an explanation for a scaling relationship astronomers have observed since the 1990s: more massive galaxies have more massive black holes. The fireworks in the early universe could well be responsible for it.
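The relationship is usually quoted as a near-linear scaling between the black hole mass and the stellar mass of the host galaxy's bulge; a commonly cited approximate form (not a figure from this article) is

    M_{\mathrm{BH}} \sim 10^{-3} \, M_{\mathrm{bulge}}

so a bulge of a hundred billion solar masses would typically host a central black hole of order a hundred million solar masses.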

Barthel: 'It is becoming clear that active galaxies are not only among the largest, most distant, most powerful and most spectacular objects in the universe, but also among the most important objects; many if not all massive normal galaxies must also have gone through similar phases of simultaneous black hole-driven activity and star formation.'

Story Source:

The above story is reprinted from materials provided by University of Groningen.



Journal Reference:

  1. Peter Barthel, Martin Haas, Christian Leipski, Belinda Wilkes. Extreme host galaxy growth in powerful early-epoch radio galaxies. The Astrophysical Journal, 2012; 757 (2): L26 DOI: 10.1088/2041-8205/757/2/L26


New processes for cost-efficient solar cell production

ScienceDaily (Sep. 19, 2012) — The competition in the photovoltaics market is fierce. When it comes to price, Asian manufacturers are frequently ahead of the competition by a nose. Now, Fraunhofer researchers are designing new coating processes and thin layer systems that, if used, could help to reduce the price of solar cells significantly.

Scientists will unveil a few of these new processes at the EU PVSEC trade show in Frankfurt from September 25 to 28.

Many people answer with a resounding "yes!" when asked if they want environmentally friendly, solar cell-based power -- provided it is inexpensive. For this reason, a veritable price war is raging among the makers of photovoltaic cells. Above all, it is the cheap products of Asian origin that are making life tough for domestic manufacturers. Tough, that is, until now: researchers at the Fraunhofer Institute for Surface Engineering and Thin Films IST in Braunschweig are providing support to these companies, engineering coating processes and thin-film systems aimed at drastically lowering the production costs of solar cells.

Hot wires instead of plasma

The photovoltaic industry is pinning its hopes particularly on high-efficiency solar cells that can achieve efficiencies of up to 23 percent. These "HIT" cells (Heterojunction with Intrinsic Thin layer) consist of a crystalline silicon absorber with additional thin layers of silicon. Until now, manufacturers have used the plasma CVD process (CVD is short for chemical vapor deposition) to apply these layers: the crystalline silicon substrate is placed in a reaction chamber filled with silane, a gas whose molecules are composed of one silicon and four hydrogen atoms. A plasma activates the gas, breaking apart the silicon-hydrogen bonds. The freed silicon atoms and silicon-hydrogen fragments settle on the surface of the substrate. But there's a problem: the plasma activates only 10 to 15 percent of the expensive silane gas; the remaining 85 to 90 percent is lost, unused. This drives enormous costs.

The researchers at IST have now replaced this process: instead of using plasma, they activate the gas with hot wires. "This way, we can use almost all of the silane gas, so we actually recover 85 to 90 percent of the costly gas. This reduces the overall manufacturing costs of the layers by over 50 percent. The price of the wire that we need for this process is negligible compared to the price of the silane," explains Dr. Lothar Schäfer, department head at IST. "Moreover, our system is the only one that coats the substrate continuously while it moves -- this is also referred to as an in-line process." This is possible because the silicon film grows at the surface about five times faster than with plasma CVD, and still with the same layer quality. At this point, the researchers are coating a surface measuring 50 by 60 centimeters; however, the process can easily be scaled up to the more common industry format of 1.4 square meters. Another advantage: the system technology is much simpler than for plasma CVD, so the equipment is substantially cheaper. For example, the generator that produces the electric current to heat the wires costs only around one-tenth as much as its counterpart in the plasma CVD process.
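The utilization figures imply a simple cost ratio for the silane alone, sketched below; the price per kilogram is an arbitrary placeholder, since only the ratio of the utilization rates matters.

    # Silane cost per unit of deposited silicon at the utilization rates
    # quoted above. The price is a placeholder; only the ratio matters.
    SILANE_COST_PER_KG = 100.0               # assumed placeholder price

    def gas_cost_per_deposit(utilization: float) -> float:
        return SILANE_COST_PER_KG / utilization

    plasma = gas_cost_per_deposit(0.125)     # plasma CVD: ~10-15% used
    hot_wire = gas_cost_per_deposit(0.875)   # hot-wire CVD: ~85-90% used
    print(f"plasma CVD pays ~{plasma / hot_wire:.0f}x more for silane")   # ~7x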

The hot-wire process is also suitable for thin-film solar cells. With efficiencies of slightly more than ten percent, these have so far offered only a moderate pay-off. Tripling the solar cells, that is, stacking three cells on top of each other, raises the efficiency considerably. But there is another problem: because each of the three cells suffers considerable material losses with plasma CVD coating, triple photovoltaic cells are expensive. The researchers therefore see another potential use for their process: the new coating method would make these cells much more cost-effective. Triple cells could even succeed over the long term if the scarce but highly efficient germanium is used. Germanium, however, is very expensive: for it to be a profitable choice, the layers must be applied while losing as little of the germanium as possible -- by using the hot-wire CVD process, for instance.

Saving 35 percent in the sputter process for transparent conductive oxide

The power generated by photovoltaic cells has to be able to flow out in order to be used. Usually, a metal contact grid is evaporated onto the solar cells to conduct away the resulting holes and electrons. For HIT cells, however, this grid is insufficient. Instead, transparent, conductive layers -- similar to those in an LCD television -- are needed on the entire surface.

This is normally done by sputtering: ceramic targets made from aluminum-doped zinc oxide or indium-zinc oxide are atomized, and the sputtered material deposits on the surface, producing a thin layer. Unfortunately, the ceramic targets are quite expensive. The researchers at IST therefore use metallic targets, which are 80 percent cheaper than their ceramic counterparts. An electronic control keeps the metal targets from oxidizing, since oxidation would change the manner in which the metal sputters. "Even though the control outlay is greater, we can still lower the cost of this production process by 35 percent for 1.4 square meter coatings," says Dr. Volker Sittinger, group manager at IST.

The research team intends to combine both processes over the long term in order to make thin-film solar cells more cost-effective and, ultimately, more profitable. "You can produce all silicon layers using hot-wire CVD, and all transparent conductive layers by sputtering with metal targets. In principle, these processes should also be suitable for large formats," states Sittinger. However, they are not yet production processes: even though the researchers can already coat areas of many square centimeters, it will take about three to five years before the processes can be used in solar cell production.

Story Source:

The above story is reprinted from materials provided by Fraunhofer-Gesellschaft.




University students put off-the-shelf helicopters to work

Written By empapat on Tuesday, 18 September 2012 | 19.36

ScienceDaily (Sep. 18, 2012) — What amounts to serious scientific research could, at first glance, be mistaken for students at The University of Alabama in Huntsville letting off a little stress with radio-controlled helicopters.

On a recent sunny, humid day, the air near the university's Optics Building is filled with the buzz of small helicopters hovering over parking lots, their tiny video cameras sending back amazingly clear images of a wide expanse of ground below.

It's all part of work being done by UAHuntsville's Systems Management and Production Center at Von Braun Research Hall. Directed by Dr. Gary Maddux, SMAP is the largest of UAHuntsville's 15 research centers.

Under the program, several UAHuntsville students are working to develop micro-UAVs that could provide low-cost surveillance while enhancing the variety of uses for these UAVs. The U.S. Army's Aviation and Missile Research and Development Center (AMRDEC) on Redstone Arsenal provided the original program funding.

William Sabados and Norven Goddard direct a small cluster of students working to enhance the use of small, commercial, off-the-shelf helicopters to be more useful for both military and commercial purposes.

While the research is conducted at UAHuntsville and most student researchers are pursuing various UAHuntsville technical degrees, students from other north Alabama universities, and even a few from area high schools, have gravitated towards the program, said Sabados, who pairs students with research projects.

Those students span a wide range of technology curricula, from computer science and software engineering majors to aerospace, mechanical, and electrical engineering majors. Even graphics majors are getting involved, which Goddard said adds an important dimension to the work of the student teams.

"They have the ability to put it on paper and see how the design actually flows together. Visual is the way to go," Goddard said.

Terming the high level of technological capabilities of the student researchers "a state asset," Goddard said he hopes most of the graduates will find employment for their skills and talents right here in Alabama.

Goddard, of the Space and Missile Defense Command's Future Warfare Center, calls the program a resource that allows the military and first responders to tap the skills and abilities of students.

Their research supports the ongoing evolution of military intelligence, surveillance, and reconnaissance (ISR). ISR platforms are getting smaller and less costly, important in an era of increasing constraints on military R&D budgets. At altitudes of just a few hundred feet, they can peer down on enemy troops below while remaining practically invisible. Tighter budgets and the need to capture the benefits of emerging technologies also push the desire for what Goddard terms "the 80 percent solution."

"We want to take these emerging technologies and apply them where we can get a 70 to 80 percent solution that we can use right now." From a military standpoint, he said this eliminates some of the need for costly R&D programs that might develop a technology to a complete solution, only to have that technology become obsolete by the time that solution is achieved.

Sabados explained that "disruptive technologies" such as mini-UAVs have the potential to change the way surveillance is done, both for soldiers in theater and for first responders or law enforcement authorities.

Today's mini-UAVs, Sabados said, are the disruptive technology that larger UAVs used to be. "Ten years ago, UAVs were considered a disruptive technology, but now you see them on the news every night," he said.

Everything in aerial surveillance is getting smaller, lighter and less expensive. The mini-copters often carry tiny cameras that are marvels of miniaturization. Student researcher Aaron Laney holds out a camera currently in use that's about the size of a stack of three dominoes, and says the trend is for cameras to get ever smaller.

And while the military benefits are obvious, law enforcement and first responders are showing increasing interest in the tiny aerial platforms with their advanced cameras and other sensors. First responders could use them for low-cost surveying of post-tornado damage, or to survey the scene after a bad auto accident. Law enforcement sees uses ranging from looking for fugitives to finding missing persons such as small children.

While military and civilian uses drive the research, Goddard also points to the influence of hobbyists, ranging from college students to retired engineers, who continue to push the envelope of new innovation. "The hobbyists have turned the game on the UAV community," he said. "They've pushed so hard on the designer and manufacturer community that they're producing components that are equal to those of large UAVs such as the Predator. The capabilities are becoming truly astronomical."

Student researcher and computer science major Aaron Laney comments on a video of a six-rotor mini-copter flying over a UAHuntsville parking lot. The team, he said, is now able to program a GPS chip that allows the mini-copters to fly a semi-autonomous flight pattern. "You just punch a few keys and tell them to fly the mission."
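That "punch a few keys" workflow amounts to uploading a list of GPS waypoints. The sketch below is hypothetical: the Waypoint type and the survey-pattern generator are invented for illustration (real autopilots typically speak a protocol such as MAVLink), but it shows the shape of a semi-autonomous survey mission.

    # Hypothetical sketch of a pre-programmed waypoint mission. The
    # Waypoint type and all coordinates are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        lat: float            # degrees
        lon: float            # degrees
        alt_m: float          # altitude above ground, metres
        hold_s: float = 0.0   # hover time at the waypoint, seconds

    def survey_pattern(corner_lat, corner_lon, rows=3,
                       spacing_deg=0.0005, alt_m=60.0):
        """Generate a simple back-and-forth (lawnmower) survey pattern."""
        mission = []
        for row in range(rows):
            lat = corner_lat + row * spacing_deg
            lons = (corner_lon, corner_lon + 4 * spacing_deg)
            if row % 2:                # reverse direction on alternate rows
                lons = lons[::-1]
            for lon in lons:
                mission.append(Waypoint(lat, lon, alt_m))
        return mission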

Goddard predicts a near-future capability that will allow re-programming the GPS for totally autonomous flight of a swarm of several helicopters flying in close formation. "What you would do is program, launch, and walk away," he quipped.

In such a swarm, each helicopter would carry a different sensor, giving observers on the ground better overall situational awareness than they could otherwise get.

Another push is for low-cost production. The confluence of UAV miniaturization and low-cost, three-dimensional printing may allow for parts to be designed using computer-aided design programs, then instantly manufactured using 3D printers, according to Goddard. He said the original program, nearing completion, has been quite successful with ideas emerging that will give new students new programs to explore.

The experience from the research activities can look good on a resume, said Goddard. "It gives them the practical experience to go along with classroom theory," he said. "They'll have something to put on their resumes that will put them miles ahead of other graduating students."

Story Source:

The above story is reprinted from materials provided by University of Alabama Huntsville.


