Institutional Review Board Statement: The study was conducted in accordance with the guidelines of the Declaration of Helsinki, and approved by the Bioethics Committee of Poznan University of Medical Sciences (resolution 699/09).

Informed Consent Statement: Informed consent was obtained from the legal guardians of all subjects involved in the study.

Acknowledgments: I would like to thank Pawel Koczewski for his invaluable assistance in gathering the X-ray data and in selecting the femur features that determined its configuration.

Conflicts of Interest: The author declares no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

CNN    convolutional neural network
CT     computed tomography
LA     long axis of femur
MRI    magnetic resonance imaging
PS     patellar surface
RMSE   root mean squared error

Appendix A

In this work, contrary to the often employed hand engineering, we propose to optimize the structure of the estimator through a heuristic random search in a discrete space of hyperparameters. The hyperparameters are defined as all CNN characteristics selected in the optimization procedure. The following features are considered hyperparameters [26]: number of convolution layers, number of neurons in each layer, number of fully connected layers, number of filters in a convolution layer and their size, batch normalization [29], activation function type, pooling type, pooling window size, and probability of dropout [28]. In addition, the batch size and the learning parameters (learning factor, cooldown, and patience) are treated as hyperparameters, and their values were optimized simultaneously with the others. It is worth noting that some of the hyperparameters are numerical (e.g., number of layers), while others are structural (e.g., type of activation function). This ambiguity is resolved by assigning an individual dimension to each hyperparameter in the discrete search space. In this study, 17 different hyperparameters were optimized [26]; thus, a 17-dimensional search space was created. A single CNN architecture, denoted as M, is characterized by a unique set of hyperparameters and corresponds to one point in the search space.

Because of the vast space of possible solutions, the optimization of the CNN architecture is accomplished with the tree-structured Parzen estimator (TPE) proposed in [41]. The algorithm is initialized with ns start-up iterations of random search. Then, in each k-th iteration, the hyperparameter set Mk is chosen using the information from the previous iterations (from 0 to k - 1). The goal of the optimization process is to find the CNN model M that minimizes the assumed optimization criterion (7). In the TPE search, the formerly evaluated models are divided into two groups: those with a low loss function value (20%) and those with a high loss function value (80%). Two probability density functions are modeled: G for the CNN models with a low loss function value, and Z for those with a high loss function value. The next candidate model Mk is chosen to maximize the expected improvement (EI) ratio, given by:

EI(Mk) = P(Mk | G) / P(Mk | Z).    (A1)

The TPE search enables evaluation (training and validation) of the candidate Mk that has the highest probability of a low loss function value, given the history of the search. The algorithm stops after a predefined number of iterations, n.
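To make the discrete search space concrete, the sketch below encodes a few representative dimensions with the hyperopt library, the reference implementation of TPE. This is a minimal sketch under stated assumptions: the paper does not name its implementation, and all names, ranges, and option lists here are illustrative placeholders rather than the values used in the study.

```python
from hyperopt import hp

# Illustrative subset of the 17-dimensional discrete search space.
# All names, ranges, and option lists are assumptions for this sketch.
search_space = {
    "n_conv_layers":   hp.quniform("n_conv_layers", 2, 8, 1),     # convolution layers
    "n_filters":       hp.choice("n_filters", [16, 32, 64, 128]), # filters per layer
    "kernel_size":     hp.choice("kernel_size", [3, 5, 7]),       # filter size
    "n_dense_layers":  hp.quniform("n_dense_layers", 1, 4, 1),    # fully connected layers
    "batch_norm":      hp.choice("batch_norm", [True, False]),    # batch normalization
    "activation":      hp.choice("activation", ["relu", "elu", "tanh"]),
    "pooling":         hp.choice("pooling", ["max", "avg"]),      # pooling type
    "pool_size":       hp.choice("pool_size", [2, 3]),            # pooling window size
    "dropout_p":       hp.choice("dropout_p", [0.0, 0.25, 0.5]),  # dropout probability
    "batch_size":      hp.choice("batch_size", [8, 16, 32]),
    "learning_factor": hp.choice("learning_factor", [1e-4, 3e-4, 1e-3]),
}
```

Each key adds one dimension to the search space, which is how the numerical/structural ambiguity noted above is resolved: structural choices become hp.choice dimensions, while numerical ones become quantized ranges.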
The whole optimization procedure is summarized in Algorithm A1.

Algorithm A1: CNN structure optimization
Result: M, L
Initialize empty sets: L = ∅, M = ∅;
Set n and ns < n;
for k = 1 to ns do
    Random search for Mk;
    Train Mk and calculate Lk from (7);
    M ← M ∪ {Mk};
    L ← L ∪ {Lk};
end
for k = ns + 1 to n do
    Select Mk maximizing EI (A1), given the history M and L;
    Train Mk and calculate Lk from (7);
    M ← M ∪ {Mk};
    L ← L ∪ {Lk};
end
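As a rough illustration of how Algorithm A1 maps onto the hyperopt driver, a minimal sketch is given below. The values of n and ns are placeholders, and train_and_validate is a hypothetical stand-in for training the CNN and evaluating criterion (7); none of these come from the study itself.

```python
from functools import partial
from hyperopt import fmin, tpe, Trials, STATUS_OK

def objective(params):
    # Train the CNN described by `params` and return its validation loss,
    # i.e., Lk from criterion (7); train_and_validate is a hypothetical
    # placeholder for the study's training routine.
    loss = train_and_validate(params)
    return {"loss": loss, "status": STATUS_OK}

n, ns = 200, 30        # illustrative iteration counts, ns < n
trials = Trials()      # stores the search history, i.e., the sets M and L

best = fmin(
    fn=objective,
    space=search_space,                # the sketch above
    algo=partial(tpe.suggest,
                 n_startup_jobs=ns,    # ns start-up iterations of random search
                 gamma=0.20),          # 20%/80% split into low/high-loss groups
    max_evals=n,                       # stop after n iterations
    trials=trials,
)
```

Here n_startup_jobs corresponds to the random-search phase of Algorithm A1, and gamma sets the quantile that divides the evaluated models into the densities G and Z of Equation (A1).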