Comp 770 Project (proposal slides)

Image-based Tree Branches Recovery

Yilin Wang

**Motivation**

Trees are
difficult to reconstruct because of their complex geometry. To achieve a
realistic model, the branches and leaves of the tree must be processed separately,
and branch reconstruction, which yields the skeleton of the tree, is a
crucial step. However, recovering occluded branches remains a difficult
problem. Most existing works use images only to generate a 3D point cloud; in
this project we want to extract more useful information from the images to refine
the branch reconstruction.

**Goals**

To recover
the tree branches from images, and to make the procedure as automatic as
possible.

Subtasks

1. 3D structure generation
from source images. The positions of branch and leaf points will be
computed with a structure-from-motion approach and refined manually.

2. Reconstruction of trunk and
visible branches.

3. Recovery of occluded
branches.

**Prior works**

Existing
techniques for tree modeling can be classified as rule-based or image-based.

Rule-based
techniques use a small set of generative rules or a grammar to create branches
and leaves; they are good at producing impressive-looking trees but difficult
to design well. Prusinkiewicz et al. [4] developed a series of approaches based
on generative L-systems. Weber and Penn [9] used geometric rules to
produce realistic-looking trees, and a number of techniques
[5][1] also take into account various kinds of interaction between the tree
and its environment.

Image-based
approaches reconstruct the tree directly from image samples; they are easy to
apply but offer limited realism. Tan et al. developed a semi-automatic modeling
system [8], in which obscured branches are predicted from the shape patterns of
visible branches. Neubert et al. [3] proposed a particle-flow method to form twigs and
branches. Additionally, Han and Zhu [2] described a Bayesian approach to modeling
tree-like objects from a single image using strong priors, and the approaches of
Sakaguchi [6] and Shlyakhter et al. [7] represent the rough shape of the tree by its
visual hull computed from silhouettes.

**Update 1 (March 15, 2009):** 3D point cloud generation from source
images

The source
images, depth maps, and calibrated projection matrices used in this project are
provided by Professor Jan-Michael Frahm
and David
Gallup. Based on these data, the first subtask is to extract the tree from the
source images. For simplicity, the source images are roughly cropped manually to
remove most uninteresting objects. One cropping example is shown in Figure 1.


Figure 1. Left image is the original source image, and right image is the
corresponding cropped image

Figure 2 (left)
shows the original depth map for the source image shown in Figure 1. We can see
that this depth map is too coarse, and many details (especially the boundary of
the tree) are incorrect. To obtain a better depth estimate, the source images are
segmented into small regions, and every pixel in a region takes the most frequent
depth value in that region as its new depth, as shown in Figure 2 (right). Here the
mean shift approach [10] is used for
the segmentation, and the segmentation result is shown in Figure 3.

Figure 2. Left image is the original depth map, and right image is the rectified
depth map

Figure 3. Mean Shift Segmentation
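The per-region depth rectification can be sketched as follows. This is a minimal numpy sketch, not the project's actual code: the function name `rectify_depth` is hypothetical, and since the depths are real-valued, "most popular value" is approximated here by the fullest histogram bin (the bin count `n_bins` is an assumption).

```python
import numpy as np

def rectify_depth(depth, labels, n_bins=64):
    """Replace each pixel's depth with the most frequent (binned) depth
    value inside its segmentation region (hypothetical helper)."""
    out = np.empty_like(depth)
    lo, hi = depth.min(), depth.max()
    for region in np.unique(labels):
        mask = labels == region
        # Histogram the region's depths and take the most popular bin.
        hist, edges = np.histogram(depth[mask], bins=n_bins, range=(lo, hi))
        k = hist.argmax()
        out[mask] = 0.5 * (edges[k] + edges[k + 1])  # bin centre
    return out
```

Here `labels` would be the region map produced by the mean shift segmentation; each region then receives a single consensus depth, which removes the speckle of the raw depth map at the cost of quantizing depth within a region.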

With the new
depth map, the next step is to extract the tree from the cropped image. Depths are used
to separate the image into foreground (the tree) and background (the man-made
objects) by thresholding (Figure 4, left), and then colors are used to remove the residual
man-made
objects with a clustering method [11] (Figure 4, right).


Figure 4. Tree extraction first by depth (left image), and then by color (right
image)

The final
extracted images are used to compute the 3D positions of the tree points by back-projection.
Suppose m = (x, y) is a 2D pixel on the image, M is its corresponding 3D point, K
is the intrinsic matrix of the camera, P = [R | t] is the extrinsic matrix, and d(x, y)
is the depth of m; then

M = R^T ( d(x, y) · K^{-1} (x, y, 1)^T − t ).

The reconstructed 3D point cloud is shown in Figure 5.

Figure 5. 3D point cloud for the reconstruction
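The back-projection step can be sketched as a few lines of numpy. This is a minimal sketch assuming the standard pinhole model x_cam ∝ K(RM + t); the function name `back_project` is hypothetical, and the real pipeline would use the calibrated K, R, t supplied with each source image.

```python
import numpy as np

def back_project(depth, K, R, t, mask=None):
    """Lift pixels with known depth to world coordinates (hypothetical
    helper).  A pixel (x, y) with depth d maps to camera coordinates
    d * K^{-1} (x, y, 1)^T; the extrinsics are then inverted."""
    h, w = depth.shape
    if mask is None:
        mask = np.ones((h, w), bool)
    ys, xs = np.nonzero(mask)                 # row-major pixel order
    d = depth[ys, xs]
    pix = np.stack([xs, ys, np.ones_like(xs)]).astype(np.float64)  # (3, N)
    cam = d * (np.linalg.inv(K) @ pix)        # camera-frame points
    world = R.T @ (cam - t.reshape(3, 1))     # undo the extrinsics
    return world.T                            # (N, 3) point cloud
```

Passing the tree mask from the extraction step restricts the cloud to the tree pixels, giving the point cloud of Figure 5.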

**References**

[1] W. V. Haevre. A simple but effective algorithm
to model the competition of virtual plants for light and space. In *Journal
of WSCG*, 2003.

[2] F. Han and S.-C. Zhu. Bayesian reconstruction
of 3d shapes and scenes from a single image. In *LK ’03: Proceedings of
the First IEEE International Workshop on Higher-Level Knowledge in 3D Modeling
and Motion Analysis*, page 12, Washington, DC, USA, 2003. IEEE Computer
Society.

[3] B. Neubert, T. Franken, and O. Deussen.
Approximate image-based tree-modeling using particle flows. *ACM Trans.
Graph.*, 26(3):88, 2007.

[4] P. Prusinkiewicz, M. James, and R. Měch.
Synthetic topiary. In *SIGGRAPH ’94: Proceedings of the 21st
annual conference on Computer graphics and interactive techniques*, pages
351–358, New York, NY, USA, 1994. ACM.

[5] P. Prusinkiewicz, L. Mündermann, R. Karwowski,
and B. Lane. The use of positional information in the modeling of plants.
In *SIGGRAPH ’01: Proceedings of the 28th annual conference on Computer
graphics and interactive techniques*, pages 289–300, New York, NY, USA,
2001. ACM.

[6] T. Sakaguchi. Botanical tree structure modeling
based on real image set. In *SIGGRAPH ’98: ACM SIGGRAPH 98 Conference
abstracts and applications*, page 272, New York, NY, USA, 1998. ACM.

[7] I. Shlyakhter, M. Rozenoer, J. Dorsey, and S.
Teller. Reconstructing 3d tree models from instrumented photographs. *IEEE
Comput. Graph. Appl.*, 21(3):53–61, 2001.

[8] P. Tan, G. Zeng, J. Wang, S. B. Kang, and L.
Quan. Image-based tree modeling. In *SIGGRAPH ’07: ACM SIGGRAPH 2007 papers*,
page 87, New York, NY, USA, 2007. ACM.

[9] J. Weber and J. Penn. Creation and rendering of
realistic trees. In *SIGGRAPH ’95: Proceedings of the 22nd annual
conference on Computer graphics and interactive techniques*, pages 119–128,
New York, NY, USA, 1995. ACM.