
boundaries and fear

When you develop any machine learning program, as its creator you need to define specific boundaries for the domain and range it will function in. The fundamental idea of deep learning is not to solve just one problem, but to solve a set of related problems. Our goal is to create a general solution to a specific kind of problem. Sure... we could easily create a specific solution to a simple case, but the target of DL and ML is to generalize beyond that specific solution and apply it to a broader domain. Artificial intelligence programs group inputs together based on similarities and on a reward signal. If the system sorts a picture into the wrong group, it updates its internal parameters in a way that nudges the output toward the correct group. It really is not intelligent in the way we usually think about intelligence. One of the most fundamental problems in machine learning is the pair of concepts called overfitting and underfitting. I say these problems are fundamental because they go beyond machines; they are really one of the core principles behind the duality of thought. When we are new to the world, not sure how it all fits together and just learning how it goes, we can overgeneralize very easily, and that can also be viewed as ignorance. For example, imagine you are raised in an all-white neighborhood; everybody you have ever seen has been white, and you don't know any different. You see one black person, in the wrong moment, robbing the corner store, and because you don't know any better you assume all black people rob stores. This may seem like a far-fetched idea, but it really isn't. A model will produce results according to the range its data is spread across, analogous to how we cannot see and infer beyond our own situation with high accuracy.
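To make that last point concrete, here is a minimal sketch of underfitting and overfitting in plain NumPy. The sine "truth", the noise level, and the polynomial degrees are all invented for illustration, not a real experiment; the point is only what happens when a model is asked about inputs outside the range its data was spread across.

```python
# A minimal sketch of underfitting vs. overfitting, using plain NumPy.
# The sine "truth", the noise level, and the polynomial degrees are all
# invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Training data: 20 noisy samples drawn from a narrow slice of the world.
x_train = np.sort(rng.uniform(0.0, 3.0, 20))
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, x_train.size)

# Test data from a wider domain than the model was ever shown,
# mirroring the point about inferring beyond our own situation.
x_test = np.linspace(0.0, 4.0, 50)
y_test = np.sin(x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial of this capacity
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

Degree 1 underfits and is bad everywhere; degree 3 generalizes; degree 9 chases the noise, scoring the lowest training error of the three while its error outside the training range explodes. That is the boundary of the data's domain in numerical form.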

What I talk about on this blog is ultimately truth that goes beyond my field but has real applications in it. I believe that the way to uncovering different methods of thinking lies in the exploration of truth at its most fundamental form. I say the problem of overfitting and underfitting goes beyond the machine learning world because it sits conceptually on a deeper and more fundamental level of truth. There is a reason that people have fear in their hearts. What drives the hate in humanity? Is it the limited range of input data we receive, which in turn makes us unable to accurately infer the behavior of the unknown? Is it that same barrier that creates hate, racism, and genocide? Is it not the same thing as fear, the feedback loop that enforces that barrier in our minds, further entrenching those hateful and misinformed beliefs?

Conceptually, our minds are like a map: we have familiar places we go, certain routines we perform inside the paths we have walked since we turned ten. We are trained to walk these specific paths because we are certain of their safety and they provide us with comfort. We are restricted to our domain of comfort, kept on the path by the fear of falling off of it. The walls of fear are real because at some point we built them; it is important to understand that they exist for a reason and have real purpose. Maybe your parents, when you were young, looked at you differently after you gained a few pounds, scolded you for having a fat face or a belly that sticks out past your belt line. Maybe it is social media and clothing companies telling you that the only way to have worth is to look perfect, skinny, and flawless. The path of comfort is not so comfortable anymore; we want to feel like we are enough, so we build new walls of fear at some level in our psyche. These new walls force us, through fear, to conform to what we see as societal standards. At a young age, when parental and societal influence is at its peak, these forces have the power to shift your view of reality in a way that harms you, to the point where you can't tell what's real and what's not. If it is so harmful to us, it begs the question: why do people stay on these paths? We cannot infer states of living to which we have not, at some level, been exposed; if the fear of that uncertainty outweighs your current predicament, the choice to remain will prevail.

The walls tell us to obey them, to keep walking the path of comfort, because we are afraid of the unknown: the other side of those walls is scary, and we do not know what lies ahead.

I believe we can create a push-and-pull model representative of fear, and that with the other correct factors in place, we will be able to build a machine that is better at rational thinking. My route to solving this would be to create a system that can think and build conceptual models in order to avoid certain outcomes. The branch of machine learning I am thinking of implementing this in is reinforcement learning. Here is a plan for a little robot that can conquer its environment. One of the core ideas of reinforcement learning is repetition: millions of iterations in order to learn an environment. Let's say the end goal for a robot is to walk around an environment and be resourceful, building a little house. One method that sim-to-real approaches use is to simulate an environment, train the robot in that simulation, and then transfer that knowledge and experience to a robot in the real world. My design would use multiple levels of thought abstraction, and I would like to introduce some fluency between those layers. Remember, this is all testable, so if I have a hypothesis about a phenomenon, I can test it and see the effect. I think it would be interesting to have the robot's algorithm, during the training phase inside the simulation, develop a conceptual model analogous to fear, held almost as a simulation at a higher level of thought abstraction. Let's say the machine dies if it falls off the edge of a table. Conceptually, there should be a wall around that table, and as we move down the levels of abstraction that wall will be represented as fear, because the machine knows there are no actual walls surrounding it. There is a fluidity between these abstractions and how we perceive reality, and the boundaries in those abstractions are very literal: moving down an abstraction level converts those abstract walls into, essentially, direction.
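Here is a minimal sketch of the table example, assuming a toy 4x4 grid as the "table" and tabular Q-learning in place of a real robot; every name, reward value, and hyperparameter below is my own illustrative choice, not a finished design.

```python
# A minimal sketch of the "fear wall" around the table: a toy 4x4 grid
# and tabular Q-learning. All rewards and hyperparameters are illustrative.
import numpy as np

SIZE = 4                                       # the table is a 4x4 grid
GOAL = (3, 3)                                  # where the little house gets built
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

q = np.zeros((SIZE, SIZE, len(ACTIONS)))       # Q-value table
alpha, gamma, epsilon = 0.1, 0.95, 0.1         # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    """Apply an action; returns (next_state, reward). A None state ends the episode."""
    r = state[0] + ACTIONS[action][0]
    c = state[1] + ACTIONS[action][1]
    if not (0 <= r < SIZE and 0 <= c < SIZE):
        return None, -100.0                    # fell off the table: the "death" the wall guards against
    if (r, c) == GOAL:
        return None, 10.0                      # task complete
    return (r, c), -1.0                        # small step cost keeps the agent moving

for episode in range(5000):                    # repetition: many iterations to learn the environment
    state = (0, 0)
    while state is not None:
        if rng.random() < epsilon:             # occasionally explore
            action = int(rng.integers(len(ACTIONS)))
        else:                                  # otherwise follow current beliefs
            action = int(np.argmax(q[state]))
        next_state, reward = step(state, action)
        target = reward if next_state is None else reward + gamma * np.max(q[next_state])
        q[state][action] += alpha * (target - q[state][action])
        state = next_state

# At the corner, the two actions that step off the table end up with
# Q-values near -100: an invisible wall made of learned fear.
print(np.round(q[0, 0], 1))
```

Notice that nothing in `step` places a physical wall at the edge. The wall exists only one level of abstraction up, in the Q-table, and when the policy reads it back out it becomes exactly what I described above: direction.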




