1 Introduction
Many problems in medical imaging analysis and computer vision involve the classification of objects based on their shapes or on their sizes and shapes. A significant amount of research and activity has been carried out in recent decades in the general area of shape analysis.
Throughout this paper we deal with bodies, i.e. geometrical objects with bounded boundaries. Several mathematical frameworks have been proposed in the literature to deal with such objects, three of which are the most widely used. Firstly, functions can be used to represent the closed contours of the objects (curves in 2D and surfaces in 3D); secondly, geometrical objects can be treated as subsets of ; and finally, they can be described as sequences of points determined by certain geometrical or anatomical properties (landmarks). In all these settings, shapes are embedded in a space which is not a vector space (in many cases it is a smooth manifold) and on which no natural metric is defined. This makes the definition of statistics particularly difficult; for example, there is no simple explicit way to compute a mean (Pennec06).
Recently we have considered the space of planar shapes, represented by simple closed plane curves with a Sobolev-type metric (Gualetal15). This space has the property of being isometric to an infinite-dimensional Grassmann manifold of 2-dimensional subspaces, and we have used this isometry to compute geodesics, distances between shapes, and mean shapes, and we have applied these concepts to study different biomedical applications. The theory was generalized to the shape space of surfaces in (Baueretal11). However, these results consider parameterized curves and surfaces.
In this approach, the contour of each body (curve in , surface in , or hypersurface in ) is represented by a mathematical structure called a current, and unparameterized curves and surfaces are considered. This framework is not limited to a particular kind of data. Indeed, it provides a unifying framework to process any set of points, curves and surfaces, or a mixture of these. No hypothesis on the topology of the shapes is assumed. In particular, it is robust to changes of connectivity of the structures. Moreover, it is weakly sensitive to the sampling of shapes and it does not depend on the choice of parameterization. However, the main advantage of this setting is that shapes are embedded in a vector space equipped with an inner product; hence, it is possible to use simple statistical tools.
A current is a mathematical object which has proved relevant for modeling geometrical data such as curves and surfaces (VaillantGlaunes05; Glaunesetal06; Durrlemanetal09).
From integration on manifolds (Morgan08; Lang95), if denotes the space of differential forms in , each p-dimensional submanifold in (in particular, may be the contour of a geometrical object) can be represented by a map that integrates each p-form along , i.e. by a map
Such a map is called a current.
In addition, it is possible to associate a subspace of currents with a Reproducing Kernel Hilbert Space (RKHS) by duality. A RKHS is a Hilbert space of mappings with useful properties. Moreover, this association allows us to represent each set of piecewise-defined manifolds by a function in a RKHS (Durrleman10).
Given a set of geometrical objects (curves or surfaces), our aim is to apply classification techniques developed for Euclidean spaces in order to divide the objects into appropriate clusters. A Hilbert space and, even more so, a RKHS can be considered the natural extension of the usual Euclidean spaces . The completeness of Hilbert spaces gives a framework in which to work with infinite-dimensional vectors as limits of finite-dimensional vectors.
This paper arose as the result of an important study conducted by the Valencian Institute of Biomechanics, the ultimate objective of which was to help decision makers (parents/relatives/children) in the size selection process when shopping online for children’s wear.
A 3D anthropometric study of the child population in Spain was carried out for that purpose. After the study was completed, a database was generated consisting of randomly selected Spanish children between and years of age. They were scanned using the Vitus Smart 3D body scanner from Human Solutions, a non-intrusive laser system formed by four columns housing the optic system, which moves from head to feet in ten seconds, performing a sweep of the body. Our work focuses on one of the aims of this study: to define an efficient sizing system.
A standard sizing system classifies a specific population into homogeneous subgroups based on certain key body dimensions
(NormaUNE; ChungaetAl07; Ibanez2012). Most standard sizing charts propose sizes based on intervals over just two or three anthropometric dimensions. However, correlations between anthropometric measures show great variability in body proportions and, as a result, it is not possible to cover so many different body morphologies with these kinds of models. In this paper, instead of using clustering methods to divide the population into sizes by simply using a set of anthropometric variables, we propose to use the body shapes represented by currents and the well-known k-means algorithm in the corresponding space.
The original k-means algorithm (Steinhaus56; Lloyd57) endeavors to find a partition such that the sum-of-squares error between the empirical mean of a cluster and the objects in the cluster is minimized. It tries to approximate this optimum partition by iterating. Starting with arbitrary initial cluster centers, an initial partition is obtained, assigning each object to its closest cluster center. Next, the new cluster centers are recalculated as the means of the observations in the clusters resulting from the previous step. The loop continues until no further changes occur. Many procedures have been developed in subsequent decades to improve this classic algorithm; see e.g. Kanungo2002 and Nazeer2009. Even though the k-means algorithm was first proposed over 50 years ago, it is still one of the most widely used algorithms for clustering (Jain2010).
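The loop just described can be sketched in a few lines. The following Python fragment is only an illustrative sketch (the paper's own implementations are in Matlab, and all names here are ours, not the authors'); it implements the classic Lloyd iteration on points in a Euclidean space, with a simple deterministic initialization:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Plain Lloyd-style k-means on the rows of X (n_samples x n_features)."""
    # Simple deterministic initialization: k evenly spaced sample points.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: each object goes to its closest cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: recompute each non-empty cluster center as the mean
        # of the observations assigned to it.
        new_centers = centers.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                new_centers[j] = members.mean(axis=0)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

The two partial minimization steps (assignment and update) are exactly the ones iterated by the algorithm of Section 4.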
Our implementations have been written in Matlab (Matlab14).
The article is organized as follows: Sections 2 and 3 concern the theoretical concepts of currents and Reproducing Kernel Hilbert Spaces. In Section 4 the k-means algorithm in the RKHS is introduced. An experimental study with synthetic figures is conducted in Sections 5 and 6. The application to classifying children's body shapes is detailed in Section 7. Finally, conclusions are discussed in Section 8.
2 From bodies to elements in a Reproducing Kernel Hilbert Space through currents
Let be bodies in whose boundaries , , are smooth hypersurfaces in . In this section we introduce the theoretical foundations to represent as elements in a Reproducing Kernel Hilbert Space (RKHS). In order to do that, we will first represent the hypersurfaces as geometrical currents.
Currents were introduced by De Rham in 1955 and further developed in the 1960 paper by Federer and Fleming on "Normal and Integral Currents", which was awarded the 1986 AMS Steele Prize for a paper of fundamental or lasting importance; their use in computational anatomy, however, is recent (J. Glaunès, PhD thesis, 2005).
Let denote the space of continuous forms on . The space of currents on is its topological dual , i.e. the space of continuous linear functionals on ; and it is a fact that every hypersurface of with finite volume can be represented by an element of . That is, from integration on manifolds (Morgan08), we know that any dimensional form of can be integrated along the hypersurface , which associates with a current such that:
(2) 
This map is injective but not surjective: not every current can be represented by integration over a hypersurface (those that can are called geometrical currents).
Suppose that the hypersurface is a parameterized surface with , then , and
(3) 
This representation is fully geometric in the sense that it only depends on the hypersurface structure and not on the choice of parameterization. Moreover, the representation of a surface as a geometrical current distinguishes between isometric hypersurfaces (that is, hypersurfaces obtained from one another by rotations and/or translations). On the other hand, the opposite current represents the same hypersurface but with the opposite orientation (since the flux through the hypersurface has the opposite sign in this case).
2.1 Vectorial representation of geometrical currents
A form can be associated with a vector field on thanks to the isometric mapping between forms on and vectors on . Then,
(4) 
Formally, the association between forms and vectors is given by the Hodge star operator and duality (doCarmo12).
2.2 Particular cases: planar curves and surfaces
As stated in the introduction, we are interested in the particular cases of planar closed curves and compact surfaces (contours of bodies in or ).
Let be a parameterized regular oriented simple curve in . We associate with the function (geometrical current)
(5) 
where is a vector field in and denotes the inner product in .
Let be an orientable parameterized surface in given by ; that is, . We associate with the function (geometrical current)
(6) 
where is a vector field in , denotes the inner product in , and .
Then, to characterize hypersurfaces (mainly curves and surfaces) through the above expressions, we measure how these integrals vary as the vector field varies. However, instead of considering all vector fields, we will define a test space of square-integrable vector fields in which varies. In particular, as in Durrleman10, we will choose as test space a vector-valued Reproducing Kernel Hilbert Space (RKHS).
It is important to note that in this case the map will not be injective; that is, the same geometrical current , as a map defined on a RKHS, may represent two different hypersurfaces. Therefore, the choice of the appropriate RKHS will depend on the application at hand.
2.3 Operator-valued kernels and the vector-valued RKHS test space
This section defines vector-valued Reproducing Kernel Hilbert Spaces via the Riesz representation theorem and studies their properties.
The abstract theory of scalar-valued RKHSs was developed by Aronszajn50. A scalar-valued RKHS is a Hilbert space of functions with some practical properties. In recent years, the study of RKHSs has been extended to vector-valued functions (the space contains vector fields from to ) (see Carmelietal06, Micchellietal05 and Caponnettoetal08), and it has now become a widely studied theory.
Let denote the Banach space of bounded linear operators from to .
Definition 2.1
Let be a Hilbert space of vector fields from to . An operator is said to be an operator-valued reproducing kernel (rk) associated with if

for every , (where , ) and,

satisfies the "reproducing property"; that is, and
Definition 2.2
Let be a Hilbert space of vector fields from to . is a vector-valued RKHS if there is an operator-valued reproducing kernel (rk) associated with .
The next theorem is a converse of sorts: if a kernel is both symmetric and positive definite, then there is a Hilbert space of vector fields from to for which it is the reproducing kernel.
Definition 2.3
A function is said to be an operator-valued positive-definite and self-adjoint kernel if for each pair , is a self-adjoint operator and
for every finite set of points in and in .
Theorem 2.4
If K is an operator-valued positive-definite and self-adjoint kernel, then there is a unique RKHS, , such that is the operator-valued reproducing kernel (rk) associated with .
It is essential to bear in mind that the proof of this theorem is based on constructing the space through the completion of .
For , , define
(7) 
The vector-valued RKHS associated with the kernel , denoted from now on by , is the closure of the span of vector fields of the form
for every ; this span is dense in . Consequently, the inner product in between two such elements is the limit of expression (7) as , tend to infinity.
Having established the vector-valued RKHS test space , currents will be evaluated on ; that is, the space of currents considered is (it contains the continuous linear functionals from to ), which includes the geometrical currents as a subset. The space of currents is a vector space with the sum () and product () operations, as in a standard space of functions (Durrleman10).
Therefore, the idea is to build the vector space spanned by the vector fields of the form and to make this space complete by adding the limit of every Cauchy sequence to it. This construction makes it possible to process discrete meshes of surfaces and continuous surfaces (limits of such finite combinations) in the same setting.
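For two finite combinations of kernel-weighted vector fields, the inner product inherited from this construction reduces to a double sum over the scalar kernel. The following Python sketch illustrates this; the function names and the Gaussian bandwidth convention exp(-|x-y|^2/sigma^2) are our assumptions, not taken from the paper:

```python
import numpy as np

def gaussian(x, y, sigma):
    """Scalar Gaussian kernel k(x, y); the bandwidth convention is an assumption."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)

def rkhs_inner(xs, alphas, ys, betas, sigma):
    """Inner product in the RKHS between v = sum_i K(., x_i) alpha_i and
    w = sum_j K(., y_j) beta_j, which equals sum_{i,j} k(x_i, y_j) alpha_i . beta_j."""
    return sum(gaussian(xi, yj, sigma) * float(np.dot(ai, bj))
               for xi, ai in zip(xs, alphas)
               for yj, bj in zip(ys, betas))
```

Note that the inner product of two infinite combinations is then obtained as the limit of these finite double sums, exactly as in expression (7).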
2.4 Curves and Surfaces as elements in a vectorvalued RKHS
We now use the properties of the RKHS in order to rewrite the geometrical currents associated with curves in and surfaces in .
Let be a vector field in . Then, by using Eq. (5), the geometrical current associated with a curve becomes:
(8) 
and by Eq. (6), the geometrical current associated with the parameterized surface is:
(9) 
where , , is the orthogonal vector to the surface at the point , and and .
Until now, each hypersurface has been associated with an element of the vector space . However, the Riesz-Fréchet Theorem (Conway13) establishes that there is an isometric, linear, bijective mapping , defined by , . As a consequence, the space is isometric to , and thus each hypersurface can be associated with a vector field in .
Therefore, as a result of the Riesz-Fréchet Theorem, it is possible to represent the parameterized curve defined in by an element in ; that is,
(10) 
where denotes the isometry from the Riesz-Fréchet Theorem, and to represent a parameterized surface by
(11) 
Consider now that the curve is only known at a finite number, , of points , which constitute a partition of the interval . Let , let denote the center of the segment , and let be an approximation of the tangent vector (the finer the partition, the better the approximation). Then,
(12) 
In practical applications each curve will thus be represented by a finite sum, which approximates the vector field.
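As a sketch of the discretization just described, the following hypothetical Python function computes the segment midpoints and finite-difference tangent vectors from an ordered sampling of a curve (names are ours, for illustration only):

```python
import numpy as np

def curve_current(points, closed=True):
    """Approximate a curve by segment midpoints and tangent vectors.

    points: (n, 2) array of ordered samples of the curve.
    Returns the segment centers and the finite-difference tangents."""
    P = np.asarray(points, dtype=float)
    if closed:
        P = np.vstack([P, P[:1]])          # repeat the first point to close the loop
    centers = 0.5 * (P[:-1] + P[1:])        # midpoint of each segment
    tangents = P[1:] - P[:-1]               # finite-difference tangent vectors
    return centers, tangents
```

The pairs (center, tangent) are exactly the point-vector pairs that enter the finite sum approximating the current.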
Suppose now that we have a triangulation of in which each triangle is represented by the vector field , where and ( is the normal vector to the triangle, whose norm encodes its area); then,
(13) 
The finite sum tends towards the integral as the mesh is refined. This finite sum is an approximation to the vector field , and it is the representation that will be used, owing to its computational simplicity.
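The per-triangle quantities used above (barycenter and area-encoding normal) can be computed as in the following illustrative Python sketch; the factor 0.5 on the cross product makes the norm of the normal equal the triangle's area, matching the description in the text (the function name is ours):

```python
import numpy as np

def triangle_current(a, b, c):
    """Barycenter and area-weighted normal of a triangle with vertices a, b, c."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    barycenter = (a + b + c) / 3.0
    # Half the cross product of two edges: its norm is the triangle area.
    normal = 0.5 * np.cross(b - a, c - a)
    return barycenter, normal
```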
Consequently, we deal with curves and surfaces as vector fields in . Thus, the distance between two surfaces (or curves) is defined as the distance between the corresponding elements in ; that is, if and are the two elements of the RKHS associated with two surfaces and , then
(14) 
Using finite approximations to and we have
(15) 
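Expanding the squared norm of the difference of two such finite sums gives three double sums over the scalar kernel. A Python sketch of this computation follows; the function names and the Gaussian bandwidth convention are our own assumptions:

```python
import numpy as np

def gaussian_gram(X, Y, sigma):
    """Gram matrix k(x_i, y_j) for the scalar Gaussian kernel (bandwidth sigma)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

def current_dist2(X, A, Y, B, sigma):
    """Squared RKHS distance between the currents sum_i K(., x_i) a_i and
    sum_j K(., y_j) b_j, expanded as <v,v> - 2<v,w> + <w,w>."""
    return (np.sum(gaussian_gram(X, X, sigma) * (A @ A.T))
            - 2 * np.sum(gaussian_gram(X, Y, sigma) * (A @ B.T))
            + np.sum(gaussian_gram(Y, Y, sigma) * (B @ B.T)))
```

Here X, Y hold the centers (or barycenters) of the two discretized shapes and A, B the corresponding tangent (or normal) vectors.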
Moreover, the sample mean in the vector-valued RKHS is calculated in the same way as in Euclidean spaces (HsingEubank15). Given a sample , the sample mean is:
(16) 
However, in general, the sample mean is not a geometrical current associated with a surface.
The distance between surfaces, obtained from the distance between the corresponding geometrical currents, gives a global estimate of the shape dissimilarity between objects. This distance will be used to define an efficient sizing system for the child population, for which a global shape dissimilarity measure between bodies, one that does not highlight where the differences occur locally, is needed.
3 Choice of the operatorvalued reproducing kernel
The choice of the kernel determines the vector-valued RKHS and, especially, its metric. The choice of this metric is therefore crucial and must be adapted to each particular application.
Based on this, we are going to use a particular class of operator-valued kernels, defined as follows:
Definition 3.1
Let be a nonempty subset, and a map. Then, for each , the operator is defined by
where is a symmetric and positive semi-definite function (i.e. and for finite sets , ).
Proposition 3.2
The operator-valued kernel established in the previous definition is well defined, symmetric and positive semi-definite, so there is a unique vector-valued RKHS (or simply ) with as its rk.
Proof. Given , the operator is obviously linear. In addition, is bounded because its norm is bounded by :
(18) 
Moreover, as is symmetric and positive semi-definite, immediately inherits these properties.
Hence, is an operator-valued reproducing kernel and there is a unique vector-valued RKHS with as its rk.
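The positive semi-definiteness condition of Definition 2.3 can also be checked numerically for a scalar kernel: for any finite point set, the Gram matrix of pairwise kernel values must have a non-negative spectrum. A small Python illustration for the Gaussian kernel (assuming the bandwidth convention exp(-|x-y|^2/sigma^2)):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                 # arbitrary points in R^3
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
G = np.exp(-d2 / 1.0)                        # Gaussian Gram matrix, sigma = 1
eigvals = np.linalg.eigvalsh(G)              # symmetric matrix -> real spectrum
min_eig = eigvals.min()                      # should be >= 0 up to round-off
```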
Although it is not known how to choose the "best" kernel for a given application, translation-invariant isotropic scalar kernels of the form are often used. In particular, the Gaussian function (also called the Gaussian kernel)
(19) 
where is a scale parameter (bandwidth), defines an operator-valued kernel that is particularly important in the literature, called the vector-valued Gaussian kernel, which determines a vector-valued RKHS with as its rk. This vector-valued RKHS has the following expression (Quangetal10)
(20) 
where denotes the Fourier transform of .

Proposition 3.3

The smaller the value of the bandwidth parameter , the larger the RKHS established.
Proof. If were in , then, using the space expression (20), the integral with would be finite and larger than the integral with parameter ,
Hence, . In conclusion, the smaller the value of the bandwidth parameter, the larger the space established.
Remark 3.4
Choice of parameter
Since the smaller the value of the bandwidth parameter, the larger the space established (Proposition 3.3), a smaller bandwidth yields a larger test space and thus a better differentiation between geometrical data: with a larger test space it is more likely that two different currents take different values on some vector field. Thus, the smaller the bandwidth, the greater the precision for characterizing geometrical data. However, if the bandwidth is too small, the distance in the RKHS detects tiny geometrical details and too much noise could be captured. In conclusion, it is essential to choose a parameter that balances these two effects, so the bandwidth should be the typical scale at which the vector fields may vary spatially.
4 kmeans algorithm in the RKHS space
In this section we review the classic k-means partitioning algorithm and describe how to adapt it to the RKHS introduced in the previous section.
Given data points in and a partition of the set of underlying objects, with nonempty classes, let:
(21) 
where denotes the centroid of the data points.
As is well known, the classic k-means clustering approach looks for a partition of such that a minimum value of is reached. This one-parameter optimization problem is equivalent to the two-parameter optimization problem:
(22) 
where minimization is also w.r.t. all vectors of points from (class representatives, class prototypes).
The k-means algorithm tries to approximate an optimum partition by iterating the partial minimization steps (Bock2007):
Algorithm 1

STEP 1. Given an initial partition , obtain the centroid vector . Set .

STEP 2. Given a centroid vector , obtain minimizing Eq.(22) with respect to , assigning each point to the class whose centroid has the minimum Euclidean distance to it;

STEP 3. Given , minimize Eq.(22) with respect to , obtaining the new centroid vector , the sample means.

STEP 4. Set and go to STEP 2 until convergence is reached.
By construction, this algorithm yields a sequence of centroids and partitions with decreasing values of the objective function (Eq. 22) that converges towards a (typically local) minimum value.
The new centroid vector obtained in each STEP 3 of this algorithm decreases the value of the objective function because the sample mean minimizes the sum of squared Euclidean distances to the points of its cluster.
In this paper, our sample is a set of vector fields () which represent geometrical data by currents. The adaptation of the above algorithm to the RKHS of these vector fields is straightforward.
In this space, the proximity between elements of the sample (the Euclidean distance in Equations 21 and 22) is measured through the distance between vector fields in . Thus, in the functions to be minimized in the previous algorithm, the Euclidean distance must be replaced by the distance given in Equation 14.
The sample means of STEP 3 in the previous algorithm are obtained using Eq. 16.
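A minimal sketch of this adaptation in Python follows (the paper's implementation is in Matlab, and all names here are hypothetical). Each shape is represented by a pair of point and momentum arrays, and the squared distance from a shape to a cluster mean is expanded into kernel inner products, so the mean of Eq. 16 never has to be formed explicitly:

```python
import numpy as np

def gram(X, Y, sigma):
    """Scalar Gaussian Gram matrix k(x_i, y_j); bandwidth convention assumed."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

def inner(shape1, shape2, sigma):
    """RKHS inner product of two currents given as (points, momenta) pairs."""
    (X, A), (Y, B) = shape1, shape2
    return np.sum(gram(X, Y, sigma) * (A @ B.T))

def dist2_to_mean(shape, cluster, sigma):
    """||v - mean(cluster)||^2 expanded into kernel inner products (cf. Eq. 16)."""
    n = len(cluster)
    cross = sum(inner(shape, s, sigma) for s in cluster) / n
    mean2 = sum(inner(s, t, sigma) for s in cluster for t in cluster) / n ** 2
    return inner(shape, shape, sigma) - 2 * cross + mean2

def kmeans_currents(shapes, k, sigma, n_iter=20):
    """k-means on currents: each shape is assigned to the cluster whose mean
    (in the RKHS sense) is closest in the distance of Eq. 14."""
    labels = np.arange(len(shapes)) % k       # simple deterministic initial partition
    for _ in range(n_iter):
        clusters = [[s for s, l in zip(shapes, labels) if l == j] for j in range(k)]
        new = np.array([np.argmin([dist2_to_mean(s, c, sigma) if c else np.inf
                                   for c in clusters]) for s in shapes])
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

The quadratic cost of the double kernel sums can be reduced in practice, but this sketch keeps the correspondence with Equations 14, 16, 21 and 22 explicit.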
5 Experimental 2D Study
In this section we study the performance of our procedure in a shape classification problem using a well-known database of synthetic figures, the MPEG-7 CE-Shape-1 Part B database. It includes binary images grouped into categories such as cars, faces, watches, horses and birds, with images corresponding to the same item but showing noticeably different shapes.
To perform this experimental study, three classes from this database of synthetic figures were considered: cars, faces and watches. Each class contains 20 elements (binary images), except the watch class, where two of them were rejected (watch2 is an atypical element because of its very large size, and watch8 is rotated, and our theoretical framework considers both size and shape). The figures were centered, and the contour of each of them defined an oriented smooth curve, which was discretized by points () for . Moreover, the face figures were rotated by degrees in order to keep a common horizontal orientation in all synthetic figures (establishing the correspondence with the children database, where all the elements are registered and have the same position). For each , from , we defined the centers of the segments and the vectors , , which define the vector field in .
In this experimental study, we were interested in analyzing situations similar to the ones that will appear in our real application in Section 7. We therefore considered two scenarios. In the first, all the synthetic figures were contracted or expanded to reach the same length along the axis, establishing an analogy between this length and the height of a child. Fig. (3) shows an example of an object from each class in this first scenario. The points are plotted in black and the vectors of each curve are plotted in different colors.
In the second scenario, half of the synthetic figures from each category were enlarged by a scale factor of 1.5. In this case there were two different "heights" for each class of figures. Moreover, each figure of the sample was multiplied by a random coefficient ranging between 1 and 1.1, in order to vary the "height" of the figures slightly.
Fig. (4) shows an example of an object from each group in the second scenario, in which there are two "heights" for each class of shapes.
The k-means algorithm presented in Section 4 was then applied to both scenarios, choosing as the value of the parameter the standard deviation of the points which define the curves of the sample in each scenario. We thus defined the Gaussian kernels with in the first scenario and with in the second. In the first scenario, the k-means algorithm recovered the three groups in accordance with the categories of figures in the database. In the second scenario, the k-means algorithm with k=6 grouped together the figures of the database with the same "height" and shape, as expected.

6 Experimental 3D Study
In this section we describe the experiments carried out with a database of 3D figures, to test the same situations as with the previous 2D database. Three classes of 3D objects are now considered: ellipsoids, spheres and pears. Each class contains 10 elements centered at the origin, and the contour of each of them is defined by an oriented smooth triangulated surface immersed in . Each surface is defined from triangles with barycenters and normal vectors , , which define the vector field associated with each surface.
We consider the same scenarios as above. In the first (Fig. (5)), all figures have approximately the same length along the Y axis (the same "height").
Fig. (6) shows the second scenario, in which there are two "heights" for each class of 3D objects.
The k-means algorithm was applied in both scenarios. Once again, the value of the parameter was chosen as the standard deviation of the points (following the criterion also used for the 2D database). The values obtained were in the first scenario and in the second. Once again, the k-means algorithm recovered the three groups corresponding to the three different geometrical objects in the first scenario, and correctly segmented the figures of the second scenario into six groups, according to their "height" and shape.
7 Application to classifying children’s body shapes
The aim of this section is to show how the aforementioned methods can be used to develop a more efficient apparel sizing system that increases the accommodation of the population by taking the child's body shape into account. This classification could then be used to choose the most suitable size in a potential online sales application. Before presenting the application, it is important to comment on the current children's sizing system, which is based on the sex and the height or the age of buyers; i.e. when a child wants to buy a T-shirt, he/she has to purchase the size associated with his/her height. This size is designed for a specific body shape; however, there is a great deal of variability in the body shapes of children with the same height. If the size associated with the child's height does not fit as a consequence of his/her body shape, he/she has to buy the previous or the next size of T-shirt, which will probably be too short or too long. In conclusion, it is essential to create a new sizing system which takes into account both the size and the shape of the body.
A randomly selected sample of Spanish children aged to years old was scanned using a Vitus Smart 3D body scanner from Human Solutions. Each child stood upright, looking forward, and the body shape was stored as a set of landmarks (homologous points on his/her surface). The children were asked to wear a standard white garment in order to standardize the measurements. From the 3D mesh, several anthropometric measurements were calculated semi-automatically by combining automatic measurements based on geometric characteristic points with a manual review.
For each child, the observed landmarks made it possible to define an oriented smooth triangulated surface immersed in , with a total of triangles, and the k-means algorithm adapted to the RKHS can be applied.
The question of how many clusters to choose is a difficult problem in data clustering, and in particular in our application. From the point of view of defining a sizing system, it is not profitable to design many sizes, because doing so would be very expensive for apparel companies. On the other hand, because the objective of an apparel sizing system is to accommodate as large a percentage of the population as possible, it is not reasonable to define too few sizes either.
As there is a different sizing system for each sex, in order to illustrate our procedure the subset of girls older than 6 was chosen from the whole data set. Children younger than 6 have difficulty maintaining a standard position during the scanning process, so they were excluded from our data set. This selection results in a sample of size 195. According to the European standard UNE-EN 13402-3, this age range has 4 different sizes associated with it (1190-1250 mm, 1250-1310 mm, 1310-1370 mm, 1370-1430 mm).
Consequently, we propose two possible sizing systems. The first aims to divide each height range suggested by the UNE-EN 13402-3 standard into two groups (two sizes). We will thus obtain two different sizes for girls with different body shapes within a common height range. With this model, most clothes should fit their buyers. The second sizing system divides the sample into a smaller number of groups (a number similar to that proposed by UNE-EN 13402-3), but bearing in mind the shape and size of the body, not just the height of the children.
Accordingly, the body contour of each girl in our dataset is represented by an oriented smooth triangulated surface. For the th triangle of the surface of the th girl, its barycenter and the normal vector to the surface at , , are calculated. The body contour of each child is then associated with the vector field in , where mm.
Following the same procedure as with the experimental databases, the k-means algorithm with k=2 is applied to each group corresponding to each height range (1190-1250 mm, 1250-1310 mm, 1310-1370 mm, 1370-1430 mm) within the sample. By using , , and , respectively, in each range, we obtain eight groups, which define the first new recommended sizing system. Each size is described using the median values (mm) of the anthropometric measurements of the group, as in Table (1):
Size  Height range (mm)  Height  Chest length  Waist length  Hip length  Group size

T1 girl  lower (1190-1250)  1209  604  538  656  15
T2 girl  upper (1190-1250)  1227  670  610  739  29
T3 girl  lower (1250-1310)  1273  643  563  688  20
T4 girl  upper (1250-1310)  1282  696.5  643.5  767  31
T5 girl  lower (1310-1370)  1331  644  564.5  701  32
T6 girl  upper (1310-1370)  1346  733  669  807  23
T7 girl  lower (1370-1430)  1393  677  586  750  31
T8 girl  upper (1370-1430)  1410.5  797.5  722  856.5  14
Fig. (7) shows part of a group of girls who belong to the same height range (1370-1430 mm). The bodies in the first row of the image are associated with size T7, and those in the second row with size T8 in the new sizing system.
Moreover, Fig. (8) makes it possible to compare some anthropometric measurements of the two groups obtained within the 1190-1250 mm height range. A similar pattern is observed for the height.
To reduce the number of sizes of the previous model, a second model is proposed, which would probably be cheaper for the clothing companies. It consists of considering all members of the subsample together and applying the k-means algorithm to divide the sample into different groups according to the shape and height of the children.
As mentioned above, the question of how many clusters to choose is a difficult problem in data clustering. Several methods have been proposed and used in the literature to make this decision (Kaufman90; Jain2010).
In our application we combine the "elbow criterion" with a goodness-of-clustering measure: the silhouette.
The idea of the "elbow criterion" is to plot the objective function of Eq. 21 against the number of clusters. The first clusters greatly decrease the objective function, but at some point the marginal decrement drops, producing an angle in the graph. The number of clusters is chosen at this point; hence the name "elbow criterion".
The silhouette of an object is a measure of how close the object is to the neighboring clusters compared with the data within its own cluster. A silhouette close to 1 implies the datum is in an appropriate cluster, while a silhouette close to -1 implies the datum is in the wrong cluster. The average of the silhouettes gives us a measure of the goodness of a clustering.
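The silhouette values can be computed directly from a pairwise distance matrix, for example the current distances of Eq. 14. A small Python sketch (function name ours, for illustration) of the standard formula s(i) = (b_i - a_i) / max(a_i, b_i):

```python
import numpy as np

def silhouette_values(D, labels):
    """Silhouette s(i) = (b_i - a_i) / max(a_i, b_i), computed from a
    precomputed pairwise distance matrix D."""
    labels = np.asarray(labels)
    n = len(labels)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        same[i] = False
        # a_i: mean distance from i to the other members of its own cluster.
        a = D[i, same].mean() if same.any() else 0.0
        # b_i: smallest mean distance from i to the members of any other cluster.
        b = min(D[i, labels == l].mean() for l in np.unique(labels) if l != labels[i])
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return s
```

Averaging the returned values gives the overall goodness-of-clustering measure used below.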
In Fig. (9) we show the plot of the objective function (a) and of the silhouette (b) against the number of clusters. Taking both criteria into account, we chose to apply the k-means algorithm with k=5. A partition of the sample is thus obtained according to the height and size of the children. Fig. (10) shows the box plots of some anthropometric measurements within each group generated.
Moreover, we define the new sizing system based on these groups, using the median of the anthropometric measurements of each group, as shown in Table (2):
Size  Height  Chest length  Waist length  Hip length  Group size 

T1 girl  1241  610  540  661  57 
T2 girl  1259  678  622  754  39 
T3 girl  1362  660  571.5  723.5  56 
T4 girl  1361.5  747.5  673  814  34 
T5 girl  1417  832  767  903  9 
Fig. (11) shows part of a group of girls who belong to the same size T1.
8 Discussion
In this paper we have proposed a novel approach that uses a current-based representation of shape and size in a clustering procedure, and we have applied it to define a more efficient children's sizing system. The data are transformed into elements of an RKHS, and the well-known k-means clustering algorithm has been adapted to this space. An experimental study with simple synthetic objects was successfully conducted to validate the procedure.
We have proposed two ways of defining an efficient sizing system. First, we segmented the data set using height, which is currently the most widely used method. We then applied the k-means algorithm with the number of sizes established within each class as . In this way, the first segmentation provides an easy first input for choosing the size, while the resulting clusters optimize shape classification within each initial input. This first classification would provide a large percentage of accommodation, but perhaps an excessively large number of sizes. To reduce the number of sizes, in the second proposed system the k-means algorithm is applied to all members of the subsample, and goodness-of-clustering criteria were applied to choose the optimal number of groups. In both cases, the clustering results have been described in terms of the anthropometric dimensions within each group.