For those familiar with announcements being broadcast over a school-wide PA system, that method of distributing general information is light-years more efficient than individually updating each grade, each class, and certainly each student. Although sometimes annoying, this method of providing updates allowed those who may have been interested to tune in and react to what was relevant to them, while everyone else could simply ignore the message and go about their business. It also didn’t matter who was making the announcement or what time the announcement was made; the information was passed along all the same.

Using delegates and events in Unity is essentially the same practice. Information is broadcast throughout your program, and whichever scripts and functions care about that broadcast can automatically react to it while everything else goes about its business, hopefully. Without this type of programmable PA system, you may be left with either a poorly performing GameObject.Find("GameObject You Want").GetComponent<ComponentYouNeed>() call, manually assigning that reference in the Inspector and limiting scalability, or hard-coding a reference in each script needing that info, preferably through a singleton. Aside from any performance issues, these approaches drastically limit your ability to modularize and reuse code. This is where Unity’s new input system comes into play.
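As a rough illustration of that broadcast pattern in plain C# (hypothetical names, no Unity required):

```csharp
using System;

// A hypothetical "PA system": the broadcaster doesn't need to know
// who, if anyone, is listening.
public static class Announcements
{
    // A C# event built on the Action<string> delegate.
    public static event Action<string> OnAnnouncement;

    public static void Broadcast(string message)
    {
        // Invoke every subscriber, if any are registered.
        OnAnnouncement?.Invoke(message);
    }
}

public class Student
{
    // Tune in or tune out at will; the broadcaster is unaffected.
    public void Listen() => Announcements.OnAnnouncement += React;
    public void Ignore() => Announcements.OnAnnouncement -= React;

    private void React(string message) => Console.WriteLine($"Heard: {message}");
}
```

Any number of listeners can subscribe or unsubscribe without the broadcaster ever holding a direct reference to them.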

Previously, if you wanted to create a Fire() method that executed on some type of input, e.g. Input.GetKeyDown(KeyCode.Space) or Input.GetMouseButtonDown(0), then you would constantly check for that input in Update(), and once that input was detected, Fire() would execute. Pretty simple. However, Fire() now not only has to know about that specific input; in order to add support for other input devices, it must also know about those inputs. That’s not too terrible if you’re only programming for one platform or input device, but have fun expanding/importing that implementation to other platforms, devices, or projects. Besides, if you’re developing for PC, do you want to be the one to tell PCMR that they can’t easily have their choice of inputs?
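That old polling approach looks something like this (a minimal sketch; the specific key and button are placeholders):

```csharp
using UnityEngine;

public class OldInputShooter : MonoBehaviour
{
    // Update() polls the legacy Input class every frame.
    private void Update()
    {
        // Every new device or rebinding means another hard-coded check here.
        if (Input.GetKeyDown(KeyCode.Space) || Input.GetMouseButtonDown(0))
        {
            Fire();
        }
    }

    private void Fire()
    {
        Debug.Log("Fired!");
    }
}
```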

Instead, the new input system is an event-driven system in which, per Unity, “you only need to bind actions to your code logic and then you can enable different devices and controls visually in the Input Action window.” There are a few different ways of setting it up, and there is a bit of a learning curve in the beginning, but a few key terms to understand are Input Bindings, Actions, and Action Maps.

· Input Bindings are the specific inputs that are mapped and monitored (e.g. keyboard keys, mouse buttons, controller buttons, joysticks, etc.)

· Actions are a collection of Input Bindings (e.g. Move, Zoom, Fire, etc.)

· Action Maps are a collection of Actions that can be swapped out depending on what controls are needed at that time (e.g. player, vehicle/land, vehicle/air, or camera controls)

Input Actions window within the inspector.

Each of these is accessible after installing the new system through the Package Manager and creating a new Input Actions asset within your project’s Assets folder. Once created, you essentially set up a method that takes an InputAction.CallbackContext parameter and then assign that method to an Action in the Inspector. Now, whenever those particular Input Bindings are activated, the relevant method is called. There are no hard-coded inputs, and if any changes are needed, you simply alter the Input Bindings.
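A callback wired up this way might look like the following (a sketch; the class, method, and log message are placeholder names):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class Shooter : MonoBehaviour
{
    // Assigned to a "Fire" Action in the Inspector.
    // The CallbackContext carries the phase (started/performed/canceled)
    // and any value produced by the bound control.
    public void OnFire(InputAction.CallbackContext context)
    {
        if (context.performed)
        {
            Debug.Log("Fired!");
        }
    }
}
```

Note that the method knows nothing about which key, button, or device triggered it; that mapping lives entirely in the Input Bindings.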

To briefly go a little deeper, after your Input Action is set up with your Action Map, Actions, and Input Bindings, you have the option to either utilize a Player Input component or register event listeners for the inputs directly via code.

Using the Player Input component is certainly easier and less code-intensive, and involves simply attaching that component directly to a game object in your scene. You can then assign the Input Action asset you previously created to “Actions” and set “Behavior” to “Invoke Unity Events”. Next, click the dropdown menu for “Events”, followed by another dropdown with the same name as your Action Map. After that, just assign your script to each required Action, set the relevant method(s) within the script, and voila, you’re all set except for the bugs you’ll most likely have to work out!
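With “Invoke Unity Events” selected, the methods you assign receive the same CallbackContext; a hypothetical Move handler might read its value like this:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerController : MonoBehaviour
{
    private Vector2 moveInput;

    // Assigned under Events -> <your Action Map name> -> Move in the
    // Player Input component's Inspector.
    public void OnMove(InputAction.CallbackContext context)
    {
        // A Move Action is typically a Vector2 (WASD, left stick, etc.).
        moveInput = context.ReadValue<Vector2>();
    }

    private void Update()
    {
        // Placeholder movement using the cached input.
        transform.Translate(new Vector3(moveInput.x, 0f, moveInput.y) * Time.deltaTime);
    }
}
```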

Actual RTS CameraController in use!

Alternatively, if you want to skip the Player Input component and do everything through code, select your Input Action asset in the Project view, check “Generate C# Class”, and either manually enter the class file path, class name, and namespace, or allow Unity to generate those for you. Then click “Apply” and your specific Input Actions class will be generated.

Now, in your own controller script holding the methods that need to listen for inputs, you’ll need to declare a variable of your particular Input Actions class and then register and deregister your events in OnEnable() and OnDisable(). For example, if you have an OnMove() method, register it to your Input Actions class by subscribing with inputActions.ActionMap.Action.performed += OnMove in OnEnable() and -= in OnDisable(). Your Actions may require knowing both when an event is performed and when it is canceled, in which case you register/deregister both .performed and .canceled on inputActions.ActionMap.Action. Finally, be sure to also enable and disable your Input Actions variable in OnEnable() and OnDisable(). Again, bugs and a little further customization aside, this should function similarly to if you had been smart and simply used the Player Input component.
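Putting that together (a sketch assuming a generated class named PlayerInputActions containing a Player map with a Move Action):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class CameraController : MonoBehaviour
{
    // Generated via "Generate C# Class" on the Input Actions asset.
    private PlayerInputActions inputActions;
    private Vector2 moveInput;

    private void Awake()
    {
        inputActions = new PlayerInputActions();
    }

    private void OnEnable()
    {
        inputActions.Enable();
        // Register for both phases so we know when input starts and stops.
        inputActions.Player.Move.performed += OnMove;
        inputActions.Player.Move.canceled += OnMove;
    }

    private void OnDisable()
    {
        // Deregister to avoid callbacks reaching a disabled object.
        inputActions.Player.Move.performed -= OnMove;
        inputActions.Player.Move.canceled -= OnMove;
        inputActions.Disable();
    }

    private void OnMove(InputAction.CallbackContext context)
    {
        // The canceled phase delivers a zeroed value, stopping movement.
        moveInput = context.ReadValue<Vector2>();
    }

    private void Update()
    {
        transform.Translate(new Vector3(moveInput.x, 0f, moveInput.y) * Time.deltaTime);
    }
}
```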

Why not over engineer a bit, especially if it actually works?
Almost finished controller! Pretty smooth, right?

So far, I’ve only been playing around with the new input system for about a week, but I’ve already experienced and benefited from its ease of both setting up initial inputs and altering existing inputs based on game requirements and playability. This isn’t to say that the system is without its frustrations, particularly regarding its first-time use and getting into more extensive Interactions, Processors, and other Action settings, but for advancing modularity and reusable code, Unity seems to be moving in the right direction.




Ben Mercier

I’m an emergent software developer with a longtime love of games and a growing understanding of how to actually build them.