
Monday, March 05, 2007

Special issue on 3D acquisition technology for cultural heritage

Machine Vision and Applications has recently published a special issue on 3D acquisition for cultural heritage edited by Luc Van Gool and Robert Sablatnig.

Table of Contents:

  1. Three-dimensional acquisition of large and detailed cultural heritage objects
    • Gabriele Guidi, Bernard Frischer, Michele Russo, Alessandro Spinetti, Luca Carosso, and Laura Loredana Micoli

  2. Petroglyph digitization: enabling cultural heritage scholarship
    • George V. Landon and W. Brent Seales

  3. A System for 3D Modeling Frescoed Historical Buildings with Multispectral Texture
    • N. Brusco, S. Capeleto, M. Fedel, A. Paviotti, L. Poletto, G. M. Cortelazzo, and G. Tondello

  4. Recent Developments in 3D Multi-modal Laser Imaging Applied to Cultural Heritage
    • François Blais and J. Angelo Beraldin

  5. Web-based 3D Reconstruction Service
    • Maarten Vergauwen and Luc Van Gool

Friday, September 22, 2006

Computer Science & Illusion

As noted on Robert Burke's blog, Trinity College Dublin (TCD) recently hosted a great lecture by Prof. Andrew Blake. He runs the gamut of topics in computer vision, covering various theories and applications.


Part 1: (embedded streaming video)

Part 2: (embedded streaming video)
If you prefer not to stream: Full Movie Download (118MB WMV).
Also, the PowerPoint slides are available (44MB ZIPPED PPT).

Monday, September 18, 2006

Is Computer Science losing its coolness?

The Seattle Times has just run an interesting story, titled Where'd The Whiz Kids Go?, on the shortage of CS students at Washington universities. It seems that this is a nationwide problem that has grown large enough to attract the attention of politicians.

From the article:

"... despite the seemingly limitless potential of computers, educators are having a tougher time than ever convincing students to pursue the field. It can be hard work. Boring, even. And there's that enduring, if unfair, image problem. Picture the socially inept geek hunched over a screen at 3 a.m., Coke in hand, pecking away at pages of incomprehensible code.

'There was such a boom of interest in the '90s, and now you get the sense around the country that computer science is past its prime. But the most exciting stuff is still in front of us.'

Meanwhile, Microsoft continues to add workers locally at the rate of 4,000 a year. In this year's record class of 5,400 UW freshmen, 300 say they're hoping to graduate in computer science or engineering. Even if none dropped out or changed majors, the class of 2010 wouldn't amount to a month's supply of new workers needed just at Microsoft's Redmond campus."
There certainly does not seem to be an easy solution to this shortage. It is also interesting to note that the ACT 2006 National Report shows that 2.05% of test takers are interested in at least a 4-year Computer Science or Math degree.

Wednesday, September 13, 2006

Interactive Tabletop Museum Exhibits

In the most recent issue of IEEE Computer Graphics and Applications, there is an interesting article titled "Interactive Tabletop Exhibits in Museums and Galleries."

From the article:

"The museum experience is an unusually tactile, sensual one, and the standard keyboard-mouse-and-screen setup might seem out of place. This trend toward sensual involvement is particularly noticeable in tabletop displays, as they appeal to two aspects of familiar daily life: the horizontal surface as a workspace, and hand gestures (or common objects) as tools for manipulating information."
The full article can be viewed online (PDF). Readers should notice that the article contains a nice sampling of various interactive/tabletop exhibits. The Tilty Table, in particular, demonstrates that interactivity and intuitive access can (and should) be combined in the museum experience. A sample video is available (MOV).

Tuesday, September 12, 2006

Smart Video Surveillance

The Journal News has conducted an interview with Arun Hampapur, manager of IBM's Exploratory Computer Vision Group. The discussion centers mainly on how the 9/11/2001 attacks shifted the group's research focus in computer vision.

From the Interview:

"Q:How did 9/11 affect your field?
A:Pre-9/11, the success stories in computer vision were around machine vision: How you inspect a printed circuit board while it is being manufactured. Now the biggest application of computer vision would be security. And there are two pieces to that security puzzle. Biometrics is answering the question, 'Who is this person?' Surveillance is answering the question, 'What is going on?'

Q:How does it work?
A:You can apply two kinds of functionality. One I called real-time alerts. You have a port and you have a fence, you don't want anyone jumping the fence. Or you have a retail store with a loading dock, and you don't want anyone on the loading dock past 9 o'clock. These are known conditions, for which you can say, 'OK, so if someone shows up on the loading dock after 9, tell me.'The second is being able to find things. Security is a kind of a cat-mouse game. Sometimes something becomes relevant only after the fact. If you remember the Washington sniper incident, somebody said there was a white van at the first scene, and then the police spent a lot of energy trying to look for the white van. There was no technology at that time which could use cameras to find white vans."
An overview (PDF) of this work has been published in IEEE Transactions on Signal Processing.
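The "real-time alerts" functionality Hampapur describes really comes down to evaluating simple rules over detection events. Here is a minimal sketch of the loading-dock rule from the interview; the event format and field names are my own invention, not IBM's system.

```python
from dataclasses import dataclass
from datetime import datetime, time

# A hypothetical detection event as it might come out of a tracker:
# an object class, a named zone, and a timestamp.
@dataclass
class Detection:
    label: str        # e.g. "person", "vehicle"
    zone: str         # e.g. "loading_dock", "fence_line"
    timestamp: datetime

def loading_dock_alert(event: Detection) -> bool:
    """Fire an alert if a person appears on the loading dock after 21:00."""
    return (
        event.label == "person"
        and event.zone == "loading_dock"
        and event.timestamp.time() >= time(21, 0)
    )

if __name__ == "__main__":
    e = Detection("person", "loading_dock", datetime(2006, 9, 12, 21, 30))
    print(loading_dock_alert(e))  # True -> raise an alert
```

The "find things after the fact" half of the puzzle is the harder part: it requires indexing attributes such as object color and type for every track so that queries like "white van" can be run retroactively.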

Friday, September 08, 2006

3D Photo Tours

With all the recent web coverage of Microsoft Photosynth, I thought that I should post some of its foundational work here.

From the paper:

"Our system consists of an image-based modeling front end, which automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo navigation tool uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. We demonstrate our system on several large personal photo collections as well as images gathered from photo sharing Web sites on the Internet."
From a computer vision standpoint, they claim to be using a modified SIFT for wide-baseline matching. While the results do not appear revolutionary on their own, it should be understood that, in this case, presentation is everything. This work appears to have a very intuitive interface that could revolutionize image search/retrieval/browsing. Check out this Interactive Demo of a trimmed-down version. Also, here is a nice video (120MB MOV) from SIGGRAPH.

Full Paper (PDF)
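For those curious what the wide-baseline matching step amounts to in practice, here is a minimal two-image SIFT matching sketch using OpenCV. It is only an illustration of the general technique, not the authors' modified SIFT or their structure-from-motion pipeline, and the image file names are placeholders.

```python
import cv2

# Load two overlapping photographs (paths are placeholders).
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors (requires a recent OpenCV build).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass Lowe's ratio test,
# a standard way to reject ambiguous wide-baseline correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# A structure-from-motion front end would now estimate relative pose
# from these correspondences (e.g. via the fundamental matrix) and
# triangulate a sparse point cloud; that part is omitted here.
print(f"{len(good)} putative correspondences")
```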

Wednesday, September 06, 2006

Organic Pixels from Butterflies

While this falls outside the normal areas covered here, it does appear to be some rather fascinating work. Besides, what good is work in graphics without a good display?

From the article:

"The wings of the male Cyanophrys remus are bright metallic blue on one side, thought to attract mates, and a dull green on the other to act as camouflage. ...each side of the wing contained different photonic structures. The metallic blue colour is produced by scales that are photonic single crystals whereas the dull green is the result of a random arrangement of photonic crystals. This randomly arranged structure may have powerful applications. The crystals can actually produce different colours – green, yellow and blue – depending on their orientation, but the overall effect in Cyanophrys remus is a dull green. The team also found a way to make the crystals generate red reflections. The red-green-blue palette could be used for flat-panel visual displays... by making an array of crystals mounted on microelectromechanical arms that could change their orientation. In that way it would be possible for each 'pixel' to produce red, green or blue."
From an outsider's perspective, this is quite amazing. I will leave it to those more versed in the area to judge how novel this work is. The results, published in Physical Review E, can be found in this paper (PDF).

Monday, September 04, 2006

Google's Manual Image Recognition

Google has recently added a collaborative Image Labeler application to their image search.

From the site:

"Google Image Labeler is a new feature of Google Image Search that allows you to label random images and help improve the quality of Google's image search results. Each user who wants to participate will be paired randomly with a partner who's currently online... Over a 90-second period, both participants will be shown the same set of images and asked to label each image... based in part on technology licensed from and developed at Carnegie Mellon University."

This is actually a fun little game to play, although it seems to be missing some depth. The work it is based on, by Luis von Ahn at CMU, is a game first and an information extractor second. He has written about this in Computer (PDF) and has also published (PDF) the work. While this certainly seems like a good use of abundant Web users, it is an interesting turn considering Google's recent acquisition of Neven Vision, as discussed on Google's blog. I wonder whether at some point Google will implement user verification of automated metadata extraction, or whether these two paths will stay separate.
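The mechanic itself is simple to state: two players see the same image, type labels independently, and only a label both produce is trusted (with taboo words forcing more specific labels over time, as in von Ahn's ESP Game). Below is a toy sketch of that agreement rule; the function name and data format are mine, not Google's.

```python
def agreed_label(player_a_labels, player_b_labels, taboo_words=()):
    """Return the first label both players typed that is not taboo, else None.

    Labels are normalized to lowercase; a match means the label gets recorded
    as metadata for the image (and may later become a taboo word, pushing
    future pairs toward more specific labels).
    """
    taboo = {w.lower() for w in taboo_words}
    b_seen = {w.lower() for w in player_b_labels}
    for word in player_a_labels:
        w = word.lower()
        if w in b_seen and w not in taboo:
            return w
    return None

print(agreed_label(["dog", "grass", "frisbee"],
                   ["park", "frisbee", "dog"],
                   taboo_words=["dog"]))   # -> "frisbee"
```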

Thursday, August 31, 2006

Binary-coded Shutter for Deblurring Still Images

Researchers at Mitsubishi Electric Research Laboratories (MERL) have developed a new shutter system for digital cameras. Their Coded Exposure Photography: Motion Deblurring using Fluttered Shutter rapidly opens and closes the shutter in a binary-coded pattern during a single exposure, effectively applying a broadband temporal filter to the motion blur so that it can later be inverted.
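A rough way to see why fluttering the shutter helps: an ordinary exposure blurs a moving object with a box filter, whose spectrum contains nulls that make deconvolution ill-posed, while a binary code spreads energy across frequencies and keeps the blur invertible. Below is a toy 1-D simulation of that intuition; the binary sequence is an arbitrary example of mine, not the optimized code from the paper.

```python
import numpy as np

n = 256
rng = np.random.default_rng(0)
signal = rng.random(n)        # stand-in for one scanline of a moving object

def blur_and_recover(chop_pattern):
    """Blur a 1-D signal with a shutter pattern, then invert it in the Fourier domain."""
    kernel = np.zeros(n)
    kernel[:len(chop_pattern)] = np.array(chop_pattern, dtype=float)
    kernel /= kernel.sum()                    # normalize total exposure
    K = np.fft.fft(kernel)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
    # Regularized inverse filter: a box kernel has nulls in its spectrum, so
    # those signal frequencies are unrecoverable, while an (unoptimized but
    # broadband) binary code typically has no nulls.
    eps = 1e-3
    recovered = np.real(np.fft.ifft(
        np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))
    return np.abs(recovered - signal).max()

box   = [1] * 16                                            # ordinary open shutter
coded = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1]    # arbitrary binary flutter
print("box shutter error:  ", blur_and_recover(box))
print("coded shutter error:", blur_and_recover(coded))
```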

Full Paper (PDF)

Very nice results. It will be interesting to see this, or something similar, introduced in consumer cameras.

Wednesday, August 30, 2006

Insect Techniques for High Dynamic Range Video

Researchers at Adelaide University, Australia, are developing Image Processing techniques that replicate the vision system of flies.

From the article:

"...conducting experiments that involved recording the activity of fly brain cells as they were shown different images. 'We were amazed at the extra detail they were able to extract from the dark parts of a scene. This led us to test exactly how they were able to do this and to reproduce the processing in electrical circuits and computer simulations.' Unlike a camera, flies and other animals can tune their eyes to the light levels of different regions of an image independently. 'In nature, the individual cells of the eye adjust to a part of the image independently in order to capture the maximum amount of information about the scene..."
Example Video (MP4)

While the results from the video look promising, I was unable to find any papers on this work, so I am a little hesitant to call it novel. It seems to fall easily under the umbrella of High Dynamic Range Imaging (HDRI). Paul Debevec has done a large amount of work in this area already. If you are interested, he has posted a nice collection of papers and links here.
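The region-by-region adaptation described in the article is similar in spirit to the local tone-mapping operators used in HDRI: each pixel is normalized by an estimate of its local surround, so dark areas receive more gain. Here is a crude Retinex-style sketch of that idea; it is my own simplification, not the Adelaide group's fly model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_adaptation(image, sigma=25.0, eps=1e-3):
    """Crude local gain control: normalize each pixel by a blurred surround.

    `image` is a float array in [0, 1]; dark regions are divided by a small
    surround estimate and therefore amplified, roughly mimicking the
    region-wise adaptation attributed to fly photoreceptors.
    """
    surround = gaussian_filter(image, sigma)
    adapted = image / (surround + eps)
    return adapted / adapted.max()            # rescale for display

# Example: a synthetic scene with a bright half and a very dark half.
img = np.concatenate([np.full((64, 64), 0.8), np.full((64, 64), 0.02)], axis=1)
img += 0.01 * np.random.default_rng(1).random(img.shape)   # faint detail
out = local_adaptation(img)
print(out.min(), out.max())
```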

Tuesday, August 29, 2006

Mood Dependent Art

It is always nice to see novel combinations of research areas that produce innovative applications. Recently, researchers from Boston University, USA, and the University of Bath, UK, have developed such a system, called "Empathic Painting: Interactive stylization using observed emotional state." From a Computer Vision perspective, the work includes facial expression recognition. Also, Image Processing and graphics are used to create a real-time painting filter. While neither area is ground-breaking by itself, together they make for a unique and creative use of technology.
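As a sketch of how the two pieces might be glued together, the output of an expression classifier could simply select parameters for the painterly filter. The mapping below is invented purely for illustration; the labels and values are not the authors' model.

```python
# Hypothetical mapping from a recognized facial expression to painterly
# rendering parameters (stroke length, color saturation, stroke jitter).
PAINT_STYLES = {
    "happy":   {"stroke_len": 12, "saturation": 1.3, "jitter": 0.05},
    "sad":     {"stroke_len": 20, "saturation": 0.6, "jitter": 0.02},
    "angry":   {"stroke_len": 6,  "saturation": 1.5, "jitter": 0.20},
    "neutral": {"stroke_len": 10, "saturation": 1.0, "jitter": 0.05},
}

def style_for(expression: str) -> dict:
    """Pick rendering parameters for the currently observed expression."""
    return PAINT_STYLES.get(expression, PAINT_STYLES["neutral"])

print(style_for("angry"))
```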

Links from the site:
Full Paper (PDF)
Example Video (AVI)

This work was published in the Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering (NPAR 2006). These proceedings include some other very interesting projects.

Monday, August 28, 2006

Stereo Reconstruction for Improved Automotive Safety

In a recent news release, Toyota describes a new safety system that implements numerous computer vision technologies.

These techniques include (from the article):

"...a newly developed stereo camera to detect pedestrians and support emergency collision evasion maneuvers by the driver. ...three-dimensional object detection information from the stereo camera to detect not only vehicles and obstacles, but also pedestrians. A near-infrared projector located in the headlights supports nighttime detection... The system retracts the seatbelts and warns the driver when it determines a high possibility of a collision. If the driver does not brake, the Pre-crash Brakes are applied to reduce collision speed."

This raises the question of how long it will be until we have black-box recorders in every automobile. Continuous data acquisition from this many sensors would certainly help with accident reconstruction.

Friday, August 25, 2006

Capturing 3D Fluid Surfaces

Researchers at the University of Delaware are:

"...proposing a novel approach for accurately reconstructing three-dimensional fluid surfaces through the design of an experimental system using a light field camera array that can simultaneously capture different views of a fluid surface.
      The light field camera array features a number of digital cameras, from 16 to 128, with specially modified flashes, lenses and apertures. Instead of one flash, each camera is equipped with four.
      ...the system works by placing a known pattern beneath the surface, with each camera in the array observing a distinct time-varying distortion pattern. A sampled fluid surface can then be measured by analyzing the distortions. For surface reconstruction, the researchers plan to develop an algorithm to minimize the error relative to the sampled data."

This should be an interesting project to follow. The previous work that I have seen with multi-flash cameras captured static scenes, so I am curious how Jingyi Yu will adapt the system to handle dynamic fluid surfaces.
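For a single overhead camera and a shallow fluid, a classical approximation says the apparent displacement of a point on the known pattern is roughly proportional to the local surface gradient, so a height field can be recovered by least-squares integration of the observed displacement field. The sketch below shows only that integration step on synthetic gradients; it is my own simplification, not the Delaware group's multi-camera algorithm.

```python
import numpy as np

def integrate_gradients(p, q):
    """Least-squares (Frankot-Chellappa style) integration of a gradient field.

    p = dz/dx (along columns), q = dz/dy (along rows); returns a height field
    z defined up to an additive constant, assuming periodic boundaries.
    """
    h, w = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(w)        # angular frequencies along x
    wy = 2 * np.pi * np.fft.fftfreq(h)        # angular frequencies along y
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                         # avoid dividing by zero at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                             # height is defined up to a constant
    return np.real(np.fft.ifft2(Z))

# Synthetic test: a gentle ripple, its analytic gradients, and the reconstruction.
y, x = np.mgrid[0:128, 0:128]
k = 2 * np.pi / 64
z = 0.5 * np.sin(k * x) * np.cos(k * y)
p = 0.5 * k * np.cos(k * x) * np.cos(k * y)   # dz/dx
q = -0.5 * k * np.sin(k * x) * np.sin(k * y)  # dz/dy
z_rec = integrate_gradients(p, q)
print(np.abs((z - z.mean()) - (z_rec - z_rec.mean())).max())  # ~ machine precision
```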