Subsequent preferences

Let us train a convolutional neural network to classify certain bugs. It is enough to pass an image through the CNN and backpropagate the error, measured by something like a squared-error loss, back through the network. The way this works is that we take an image of a bug and pass it through a series of kernel convolutions. With each layer building on top of the last, the network forms a representation of what each bug is supposed to look like. Maybe there is a 'higher truth' to bugs that the machine does not know about: how the structures of their exoskeletons, or their colors, interrelate. As it turns out, there is a conceptual structure known as a phylogenetic tree that describes how closely related species are. Would it be possible for the machine to learn this kind of pattern along with its original goal of learning patterns to classify bugs?
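To make the setup concrete, here is a minimal sketch of such a classifier in PyTorch. The architecture, the input size, and `NUM_SPECIES` are illustrative assumptions rather than a fixed recipe, and the batch is a random placeholder.

```python
import torch
import torch.nn as nn

NUM_SPECIES = 50  # hypothetical number of bug species

class BugCNN(nn.Module):
    def __init__(self, num_classes=NUM_SPECIES):
        super().__init__()
        # A series of kernel convolutions, each layer building on the last.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = BugCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a placeholder batch of bug images and labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, NUM_SPECIES, (8,))
# Cross-entropy here; a squared-error loss on one-hot targets would also work.
loss = nn.functional.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()  # backpropagate the classification error through the network
optimizer.step()
```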

This evolutionary hierarchy is a subsequent preference to the main goal of the machine: a mechanism to improve the functionality of the network based on something the engineers believe to be true. By accepting the phylogenetic tree as a principle of how the physical structures of bugs relate to each other, we can use this structure to train the network to form 'truer' representations of bugs and the differences between them. We should exploit things we think are true about the environment to further our understanding of it. The main task, classification, should keep the highest priority, while the truths we discover about the data should carry lesser importance in data processing. One complication is that these subsequent preferences often take different forms, so they cannot all be integrated into the system in the same way.
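To keep that priority explicit in the objective itself, the subsequent preference can enter as a small weighted term added to the classification loss. This is a minimal sketch; the weight value is an assumption, chosen only so the auxiliary term cannot dominate.

```python
# Total objective: the primary classification loss dominates, and the
# auxiliary term (a 'truth' we believe about the data) is weighted lower.
AUX_WEIGHT = 0.1  # hypothetical; kept well below 1 so classification keeps priority

def total_loss(classification_loss, auxiliary_loss, weight=AUX_WEIGHT):
    return classification_loss + weight * auxiliary_loss
```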

One way the phylogenetic tree could be integrated to guide the model is through leniency in the classification of species. What we really want to teach the machine is: "It's okay if these species look really similar, they are supposed to; it's more respectable to mistake closely related species than to mistake species that are not as related." This matters especially at the beginning of the training process, when results are more random, because it provides an initial secondary signal for the model as it builds an internal picture of how bugs are related.
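Here is one hedged sketch of that leniency, assuming a phylogenetic distance matrix is available: derive soft targets so that probability mass placed on a close relative is penalized less than mass placed on a distant species, then anneal the term away as training progresses. The random `phylo_dist` stand-in, the temperature `tau`, and the linear annealing schedule are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

NUM_SPECIES = 50  # as in the earlier sketch

# phylo_dist[i, j]: hypothetical phylogenetic distance between species i and j
# (0 on the diagonal, larger for less related species); random here as a stand-in.
phylo_dist = torch.rand(NUM_SPECIES, NUM_SPECIES)
phylo_dist = (phylo_dist + phylo_dist.T) / 2
phylo_dist.fill_diagonal_(0.0)

def phylo_soft_targets(labels, tau=1.0):
    # Close relatives of the true species receive nonzero target probability,
    # so mistaking them costs less than mistaking distant species.
    return F.softmax(-phylo_dist[labels] / tau, dim=1)

def phylo_loss(logits, labels, tau=1.0):
    log_probs = F.log_softmax(logits, dim=1)
    return -(phylo_soft_targets(labels, tau) * log_probs).sum(dim=1).mean()

def combined_loss(logits, labels, step, total_steps, aux_weight=0.1):
    # The leniency term is strongest early in training, when results are
    # more random, and fades out linearly as training progresses.
    anneal = max(0.0, 1.0 - step / total_steps)
    return F.cross_entropy(logits, labels) + aux_weight * anneal * phylo_loss(logits, labels)
```

Because the soft targets still peak at the true species, this term keeps pushing toward correct classification; it only softens the penalty for confusions between close relatives.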
