One of the critical steps toward performing computational biology simulations with boundary-fitted, volume-filling, mesh-based integration methods is having topologically faithful feature geometry, derived from experimental digital image data, available as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field-data distributions; these must be teased out using image-processing tools. The geometric features that need to be captured from the digital image data are complex and three-dimensional. The process and tools we have developed therefore perform feature extraction on volumetric image data represented as data-cubes, formed from stacks of 2-D images. This allows us to take advantage of curvature information of 2-D surfaces in 3-D space during the segmentation and feature extraction process. The process is, in outline: 1) segmenting the data to isolate and enhance the contrast of the features we wish to extract and reconstruct, 2) extracting the geometry of the features using an isosurfacing technique, and 3) building the computational mesh from the extracted feature geometry. "Quantitative" image reconstruction and feature extraction is done for the purpose of generating computational meshes, not merely for producing graphics "screen" quality images. For example, the surface geometry that we extract must form a closed, water-tight surface.
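The first stage of the pipeline above, assembling 2-D image slices into a volumetric data-cube and applying a simple threshold segmentation, can be sketched as follows. This is a minimal illustration only: NumPy, the function names, and the threshold-based segmentation are assumptions for the sketch, not the authors' actual tooling, and the isosurfacing step (step 2) would in practice use a technique such as marching cubes rather than the raw voxel mask shown here.

```python
import numpy as np

def build_data_cube(slices):
    """Stack 2-D image slices into a 3-D data-cube indexed (z, y, x)."""
    return np.stack(slices, axis=0)

def segment(cube, iso_value):
    """Illustrative threshold segmentation: mark voxels at or above iso_value.

    A real pipeline would enhance contrast first and pass the segmented
    cube to an isosurfacing routine to extract a closed surface."""
    return cube >= iso_value

# Synthetic example: three 4x4 "slices" with a bright region in the middle one.
slices = [np.zeros((4, 4)) for _ in range(3)]
slices[1][1:3, 1:3] = 1.0

cube = build_data_cube(slices)
mask = segment(cube, 0.5)
print(cube.shape)       # (3, 4, 4)
print(int(mask.sum()))  # 4
```

Working on the full 3-D cube, rather than slice by slice, is what makes the curvature of surfaces embedded in the volume available to the segmentation step.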