
Overview: 
Explore what OpModes are, how they work and how to get started creating your own OpModes.
Objectives: 

Understand what OpModes are, the difference between looping and linear OpModes, and how to create and use OpModes.

Content: 

The term OpMode or Operational Mode (also Op Mode and opmode) refers to a class located within the FTC SDK (robot controller app source code). You create this class to add your code to the controller app. Your code is really just a part of the controller app, with the rest of the app supplied by the FTC source code. We don't modify that other part of the code, we just create the custom robot behavior we want by adding our own OpModes. Here is a quick video overview of OpModes.

So how do we do this? We create a new class and extend the FTC provided OpMode class. In essence, we add functionality to the base application by adding new OpModes to it. Each "program" we write for our robot is a class that “extends” the OpMode class. A class that is an extension of another class is a descendant or sub-class: it has (inherits) the properties and methods of the original, but those can be changed or added to. We discuss extending classes in this lesson. When a robot is being operated, the driver station is used to select an OpMode and the controller phone runs only that OpMode.

A quick refresher on robot coding: all robot programs are essentially a looping activity. Your code runs in a loop, repeatedly obtaining input, acting on that input, and doing it again.

OpModes are of two types, regular and linear. In a regular OpMode, the predefined method loop() is called repeatedly during robot operation. You write code to respond to these calls to loop(). The key idea is that you do not write a "loop" in your code; the base OpMode provides that for you by calling the loop() method repeatedly on a timed basis. This is similar to an event-based programming model: your code responds to the "loop" event. This model is somewhat more difficult for beginners to use.
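
To make this concrete, here is a minimal sketch of a regular OpMode (a hedged example, not FIRST's sample code); the class name and the "left_drive" hardware name are ours for illustration:

    package org.firstinspires.ftc.teamcode;

    import com.qualcomm.robotcore.eventloop.opmode.OpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.hardware.DcMotor;

    @TeleOp(name = "Iterative Example", group = "Examples")
    public class IterativeExample extends OpMode {
        private DcMotor leftDrive;

        @Override
        public void init() {
            // Runs once when Init is pressed on the driver station.
            // "left_drive" is a hypothetical name from the robot configuration.
            leftDrive = hardwareMap.dcMotor.get("left_drive");
        }

        @Override
        public void loop() {
            // Called repeatedly by the base OpMode; we write no loop of our own.
            leftDrive.setPower(-gamepad1.left_stick_y);
        }
    }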

The linear OpMode is a traditional sequential execution model. Your code is started by the base OpMode and runs on its own until execution is over. In this model you must provide the loop in your code. This model is simpler to use and understand. Note that either model is valid and the choice of OpMode is up to the programmer; however, the lessons in this Unit will focus on the linear OpMode.
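
Here is the same behavior sketched as a linear OpMode; note that we write the loop ourselves. Again, the names are ours for illustration:

    package org.firstinspires.ftc.teamcode;

    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.hardware.DcMotor;

    @TeleOp(name = "Linear Example", group = "Examples")
    public class LinearExample extends LinearOpMode {
        @Override
        public void runOpMode() throws InterruptedException {
            // "left_drive" is a hypothetical name from the robot configuration.
            DcMotor leftDrive = hardwareMap.dcMotor.get("left_drive");

            waitForStart();             // block until Start is pressed

            while (opModeIsActive()) {  // we provide the loop ourselves
                leftDrive.setPower(-gamepad1.left_stick_y);
                idle();                 // yield between loop passes
            }
        }
    }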

In either case, when you add a new OpMode (a class file) you need to tell the base controller app that you have done so. You do this by using a special Java statement called an Annotation. An Annotation is an instruction to the Java compiler and is used by the FTC SDK to register your OpMode. The Annotation is placed in your code just above the OpMode class name; it contains a title for your OpMode and classifies the OpMode as autonomous or teleop. You can further place your OpModes into groups of your choosing. You can temporarily remove an OpMode by adding another Annotation which disables it. We will show exactly how this is done in the program examples we will be looking at shortly. This registration process is what makes your OpMode visible to the robot controller phone and available to run.
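
As a preview, here is roughly what the registration Annotation looks like sitting above a class declaration (the title and group are ours for illustration):

    package org.firstinspires.ftc.teamcode;

    import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

    // The Annotation goes directly above the class declaration. It gives the
    // OpMode a title, marks it autonomous (vs. @TeleOp), and assigns a group.
    @Autonomous(name = "Auto Example", group = "Examples")
    // Adding @Disabled here (com.qualcomm.robotcore.eventloop.opmode.Disabled)
    // would temporarily hide this OpMode from the driver station list.
    public class AutoExample extends LinearOpMode {
        @Override
        public void runOpMode() throws InterruptedException {
            waitForStart();   // registration demo only; no robot behavior yet
        }
    }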

Either type of OpMode can be used to program the two modes of robot control program execution, autonomous and teleop. In autonomous, the robot is not under the control of humans and as such receives no input from the driver station phone. This mode is timed and is typically 30 seconds long. A timer on your driver station phone can be used to control the autonomous period, stopping your robot for you, or you can manually stop the robot by pressing Stop (the square black button) on the driver station. Your code will have to make all decisions and handle everything the robot is designed to do in the autonomous period.

In teleop mode, the robot is operated with input from humans via the Xbox controllers attached to the driver station. This mode is typically 2 minutes long and is stopped manually (Stop button) under direction of the match referee. In this mode your code will monitor Xbox controller input and translate that input into robot actions.

Both modes are started by pressing the Init button and then the Start (arrow) button on the driver station when directed by the referee. The Init button tells your code (teleop only) that it should perform whatever initialization functions you have programmed, and the Start button begins program execution. We will explore these modes of execution and the driver station controls in more detail shortly.

 


Overview: 
Explore the procedures to install the software tools needed to develop Java programs for Tetrix robots.
Objectives: 

Complete the installation of all of the software tools needed to program Tetrix robots with Java.

Content: 

This course will not delve into Tetrix hardware details or discuss how to build the physical robot. It is assumed you will learn about these topics elsewhere. However, here is a refresher (watch first 4:30) on the Tetrix hardware environment. Here is a diagram of the Tetrix system hardware components and a diagram of the basic wiring of the control system.

As we discussed earlier, the Tetrix control system consists of two cell phones, a robot controller phone and a driver station phone. You download the driver station phone application (app) and a test robot controller app from Google Play. Search Play for "FTC Robot". You do not modify the driver station app. You can also download a demo version of the controller app onto the controller phone. This allows you to get some familiarity with the two apps and how the controller phone is configured to know about the specific hardware devices (motors, sensors) that are part of your robot. Here is a lesson (watch from 14:00 to 28:12) on these two apps showing how to operate them.

The source code for the robot controller app is available to you to download and install into Android Studio (AS). This is how you create your own robot control programs: by modifying this FIRST-supplied controller app. This source code is called the FTC SDK.

The procedures for installing the software tools you need are discussed in this lesson. Links to the components discussed in the video are below.

Here is a link to download the Java runtime.

Here is a link to download the Java SDK. Download the i586 file for 32-bit Windows, x64 for 64-bit Windows. When installing, you only need the Development Tools; you can X out Source Code and Public JRE.

Here is a link to download the FTC SDK on github. On the github page for the SDK, click the green Clone or Download button. Then click download zip file. As a suggestion, create a folder called FTC Java in your My Documents folder and extract the FTC SDK (ftc_app-master folder) from within the zip file into the FTC Java folder. Then rename that folder to include the FTC SDK version, e.g., ftc_app-master-3.4. This will allow you to keep older versions of the SDK and safely install new versions. You should not overlay an existing version with a newer version. You can find the version by scrolling down on the github page to the Release Information section. The version of the SDK will be shown there. If you install a newer version of the SDK, locate the TeamCode folder inside the old ftc_app-master-n.n folder with Windows Explorer and copy your code to the TeamCode folder in the new version folder.

Here is a link to download Android Studio.

As shown in the video, at the end of the AS install process, you will be prompted to tell AS what project to start with. Select Import Project (Gradle) and point it at the folder where you installed the FTC SDK (ftc_app-master-n.n). At this point AS will use Gradle to import and analyze your project.

Gradle is the name of the tool used with AS to compile and deploy the robot controller app. On first import of the controller project, Gradle will scan the project, determine what Android components are needed, and flag any missing components as errors along with a link you can double-click to install the missing item. A number of install operations may be flagged during this initial Gradle sync, and both the scan and the installs can take a long time. Be patient and complete each flagged install.

After Gradle processing completes, AS will show a blank editing area on the right and the project navigation window on the left (or possibly just a single blank editing area). On the vertical bar left of the navigation window or editing area, select Project. Then, in the view drop-down list at the top of the project window, select Android. This will give you the simplest view of the project. You should see two main folders, FtcRobotController and TeamCode. FtcRobotController contains the low-level FIRST-provided components of the robot controller app. You will not need to modify any part of this code. However, the FIRST-provided example code is located here: open the folders java, then org.firstinspires.ftc.robotcontroller (a package), then external.samples to see it. This example code is a very valuable resource for learning how to program many robot functions and use various sensor devices once you have completed this course.

The TeamCode folder is where you will put all of your source code. Open that folder, then java, then org.firstinspires.ftc.teamcode, which is the folder (also the package) where your code will be.

One final installation step: locate the platform-tools folder in the Android SDK folder, which by default is located at C:\Users\<yourusername>\appdata\local\Android\sdk. From platform-tools copy the files adb.exe, AdbWinApi.dll and AdbWinUsbApi.dll to the C:\Windows folder.

Note: If you are in a classroom or other situation where multiple users, with different Windows user names, will share a single PC to work on this curriculum, please see these instructions on Android Studio shared by multiple users on the same PC.

We will learn more about how to use Android Studio in a later lesson.

A great resource to use while working with the FTC platform is the FTC Technology Forum.

Finally, here is a lesson package by Modern Robotics that explores the hardware components in great detail. You don't need to explore this now but you may wish to look at this material later to gain much more detailed information about the hardware components, how they work and what you can do with them. When you visit the Modern Robotics Education web site, you will be prompted to login. Click on guest access below the login boxes to access the site without registering.

Here is a documentation package that discusses using the new REV Robotics Expansion Hub controller instead of the Modern Robotics controllers.

 

 


Overview: 
Introduction to Java programming for the Tetrix platform.
Objectives: 

Understand the main concepts of the Tetrix robot control system hardware and software.

Content: 

This lesson is the first in the "off ramp" Unit for Tetrix programmers. This Unit contains a detailed exploration of writing Java programs for the Tetrix control system. Don't forget to complete the rest of the Java curriculum starting with Unit 12.

We have been learning a lot about the Java programming language. Now it's time to explore how we actually write, compile and deploy Java programs for the Tetrix (FTC) robotics control system.

Tetrix-based robots use a far more complex control system than EV3-based (FLL) robots. At the FTC level, robots engage in autonomous activity, meaning the robot is not under the control of a human, just like EV3 robots. However, autonomous activity is a relatively small part of the match that is played in competition. The larger portion of match time is teleoperated activity, where the robot is under remote control by human operators. As such, the control system consists of two hardware devices, a robot controller device and a driver station device. The two devices are connected (paired) over a WiFi Direct network. With the Tetrix system, the two devices are Android-based cell phones.

The driver station cell phone is fairly straightforward. The software for the driver station is provided by FIRST and is not modified by you. Xbox game controllers plug into the driver station phone and are the input devices for robot control.

The controller cell phone is more complex. This phone is attached to the robot and interfaces with controller hardware that allows the phone to connect to the various robot hardware devices like motors and sensors. You write the software that runs on the controller phone and operates the robot with input from the driver station phone's game controllers.

You can write programs for Tetrix robots with block based programming tools or with Java (discussion). This curriculum only deals with Java. Java programs can be developed on a Windows PC using the Android Studio IDE or directly on the controller phone with OnBot Java. OnBot Java allows you to write Java programs by using a web browser to connect to a Java development tool hosted on the controller phone. This curriculum is focused on using Android Studio to write robot control programs and will not discuss OnBot Java. However, the Java exercises in this curriculum will work if pasted into OnBot Java. You can learn about OnBot Java here.

The software tools we will be using to write Tetrix robot control programs are:

  • Driver Station phone program (phone)
  • Java SDK (PC)
  • Integrated Development Environment (PC)
  • Plugins for the Integrated Development Environment (PC)
  • Android Development Kit (PC)
  • Control program SDK from FIRST for FTC (Tetrix) (PC)

We will discuss each of these tools and how to install them in detail in the following lesson.

The Driver Station phone software is provided by FIRST and downloaded from Google Play.

The Java SDK is required on your development PC to be able to compile Java programs.

An Integrated Development Environment (IDE) is a tool that makes it easy to create, compile and deploy programs to devices. Because the robot controller is an Android cell phone, the control program is actually a phone application. The IDE we will be using is Android Studio. Android Studio (AS) is similar to Eclipse or Visual Studio but is optimized for creating phone applications. There are plugins to AS supplied by FIRST that customize AS for use in developing Tetrix control programs.

The final piece is the FTC SDK provided by FIRST. Since the robot control program is an Android phone application, FIRST has provided a base phone application which handles the details of phone applications and includes the libraries (API) needed to access robot hardware and communicate with the Driver Station phone. The design of this base application allows you to modify the application by simply adding your own classes (called OpModes) to the base application. The base application hides the details of Android phone applications so you can focus on programming your robot. The base phone application does not do any robot control; that is the responsibility of the classes you add. This base phone application is delivered to you as an Android Studio project that generates the phone application. This project for the base phone application is referred to as the FTC SDK.

 


Overview: 
Explore using two sensors, UltraSonic and Gyro, to detect and avoid obstacles while driving.
Objectives: 

Understand how to use sensors to program your robot to avoid obstacles while driving.

Content: 

We have looked at test programs for several sensors. Now let's use two sensors to create a practical example. We will take the simple driving sample and use an UltraSonic sensor to detect obstacles in the path of the robot, then use the Gyro sensor to execute a 90-degree turn to avoid an obstacle and continue driving. A Touch sensor, along with the escape key on the EV3, is used as a way to stop the program.

Create a new package called ev3.exercises.driveAvoid. Create a new class in that package called DriveAvoid and copy the following code into that class:
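
The original listing for this exercise is not reproduced here. Below is a minimal sketch of the described behavior written directly against the leJOS API rather than the lesson's library wrapper classes; the port assignments (motors on A and B, touch on S1, gyro on S2, UltraSonic on S4) and the 25 cm obstacle threshold are assumptions you should adapt to your robot:

    package ev3.exercises.driveAvoid;

    import lejos.hardware.Button;
    import lejos.hardware.motor.EV3LargeRegulatedMotor;
    import lejos.hardware.port.MotorPort;
    import lejos.hardware.port.SensorPort;
    import lejos.hardware.sensor.EV3GyroSensor;
    import lejos.hardware.sensor.EV3TouchSensor;
    import lejos.hardware.sensor.EV3UltrasonicSensor;
    import lejos.robotics.SampleProvider;

    public class DriveAvoid {
        public static void main(String[] args) {
            EV3LargeRegulatedMotor leftMotor = new EV3LargeRegulatedMotor(MotorPort.A);
            EV3LargeRegulatedMotor rightMotor = new EV3LargeRegulatedMotor(MotorPort.B);
            EV3UltrasonicSensor sonic = new EV3UltrasonicSensor(SensorPort.S4);
            EV3GyroSensor gyro = new EV3GyroSensor(SensorPort.S2);
            EV3TouchSensor touch = new EV3TouchSensor(SensorPort.S1);

            SampleProvider distance = sonic.getDistanceMode();
            SampleProvider angle = gyro.getAngleMode();
            SampleProvider touched = touch.getTouchMode();
            float[] distSample = new float[distance.sampleSize()];
            float[] angleSample = new float[angle.sampleSize()];
            float[] touchSample = new float[touched.sampleSize()];

            gyro.reset();
            leftMotor.forward();
            rightMotor.forward();

            // Drive until the escape key or the touch sensor is pressed.
            while (!Button.ESCAPE.isDown()) {
                touched.fetchSample(touchSample, 0);
                if (touchSample[0] != 0) break;

                distance.fetchSample(distSample, 0);
                if (distSample[0] < 0.25f) {           // obstacle closer than 25 cm
                    gyro.reset();                      // current heading becomes zero
                    leftMotor.forward();               // spin in place to the right
                    rightMotor.backward();
                    do {
                        angle.fetchSample(angleSample, 0);
                    } while (angleSample[0] > -90);    // right turns read negative
                    leftMotor.forward();               // resume driving
                    rightMotor.forward();
                }
            }

            leftMotor.stop(true);
            rightMotor.stop();
            sonic.close();
            gyro.close();
            touch.close();
        }
    }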

This program will drive the robot, and if it detects an obstacle in its path it will make a 90-degree right turn and continue driving. It will drive until the escape key or touch sensor is pressed.

 


Overview: 
Explore how to use the Color sensor.
Objectives: 

Learn how to use the Color sensor.

Content: 

The Color Sensor is used to determine the amount of light reflected from a surface and also the color of the reflected light. The Color Sensor is typically used in line following applications where the surface is the table the robot is operating on. The Color Sensor must be close to the surface, usually about 1 cm, to work well. The sensor has a multi-color LED (called the floodlight) that can be used to illuminate the surface. The Color Sensor is more complicated than the other sensors. It has several modes of operation:

  • ColorID: Returns a numeric value that maps to a single color. Values can be found in the lejos.robotics.Color class. Only recognizes basic colors.
  • Red: Returns the light level (brightness) of red light. The red floodlight LED should be turned on. Red light offers better detection of light levels.
  • RGB: Returns a lejos.robotics.Color object with the Red, Green and Blue values set according to the brightness (intensity) of those colors detected.
  • Ambient: Returns the ambient light level detected.

You must select the appropriate mode for your application and you will probably need to experiment to determine which mode works best.

As we have done with the other sensors, we have a library class called ColorSensor that simplifies using the EV3ColorSensor. Create a new class called ColorSensor in the library package and copy this code into that class.

Now create a new package called ev3.exercises.colorDemo and in that package add a class called ColorDemo. Copy the following code into that class:
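
The original listing is not reproduced here; the sketch below demonstrates the same sequence of modes using the EV3ColorSensor directly. The sensor port (S3) is an assumption, and the lesson's ColorSensor wrapper would simplify several of these steps:

    package ev3.exercises.colorDemo;

    import lejos.hardware.Button;
    import lejos.hardware.lcd.LCD;
    import lejos.hardware.port.SensorPort;
    import lejos.hardware.sensor.EV3ColorSensor;
    import lejos.robotics.Color;
    import lejos.robotics.SampleProvider;
    import lejos.utility.Delay;

    public class ColorDemo {
        public static void main(String[] args) {
            EV3ColorSensor sensor = new EV3ColorSensor(SensorPort.S3);

            LCD.drawString("Press any key", 0, 0);
            Button.waitForAnyPress();

            show(sensor.getAmbientMode(), "Ambient");  // floodlight off

            sensor.setFloodlight(Color.RED);           // red LED for light levels
            show(sensor.getRedMode(), "Red");

            sensor.setFloodlight(Color.WHITE);         // white light for true color
            show(sensor.getRGBMode(), "RGB");

            // Single color id; the lesson's ColorSensor wrapper converts
            // this numeric value to a color name.
            while (!Button.ESCAPE.isDown()) {
                LCD.drawString("ColorID: " + sensor.getColorID() + "  ", 0, 2);
            }
            sensor.close();
        }

        // Display samples from the given mode until escape is pressed.
        private static void show(SampleProvider mode, String label) {
            float[] sample = new float[mode.sampleSize()];
            while (!Button.ESCAPE.isDown()) {
                mode.fetchSample(sample, 0);
                StringBuilder line = new StringBuilder(label + ":");
                for (float v : sample) line.append(String.format(" %.2f", v));
                LCD.drawString(line.toString() + "   ", 0, 2);
            }
            Delay.msDelay(500);  // allow time for the escape key to be released
        }
    }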

This program demonstrates each mode of the Color sensor. After the wait for start, the ambient light intensity is displayed. You can hold the EV3 in your hand or, better yet, place it on various surfaces to see the values returned by the sensor. When done with ambient, press the escape button to move to the next mode. That mode measures the red light intensity with the red LED turned on. Press the escape button again to move to displaying the RGB color detected. Note we turn on the white light on the LED to better detect actual surface color. Press the escape button again to move to detection of a single color value. This color value is numeric, so to make our lives easier, the ColorSensor class has a method to convert the numeric color value to a color name.

 


Overview: 
Explore using a Gyro (gyroscope) sensor to determine and alter robot direction of travel.
Objectives: 

Learn how to use a Gyro (gyroscope) sensor to determine robot direction of travel and how you can use it to control direction of travel.

Content: 

A Gyro (gyroscope) sensor can be used to monitor robot direction of travel. You can use a Gyro sensor to make sure you travel in a straight line or to control turns accurately. A Gyro sensor reports any direction change by your robot and that information can be used to control direction of travel.

As we did for the Touch and UltraSonic sensors, we are going to use a library class that wraps the EV3GyroSensor class and exposes simpler methods to use in our programs. Create a new class file called GyroSensor in the library package and copy this code into that class. You should study this class to see how the EV3GyroSensor is used to return direction information to your programs.

Once that is done, create a new package called ev3.exercises.gyroDemo. Then add a class called GyroDemo to the package. Copy the following code into the new class file:
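
The original listing is not reproduced here. This sketch shows the described behavior using the EV3GyroSensor directly; the sensor port (S2) is an assumption, and the lesson's GyroSensor wrapper would hide the sample-fetching details:

    package ev3.exercises.gyroDemo;

    import lejos.hardware.Button;
    import lejos.hardware.lcd.LCD;
    import lejos.hardware.port.SensorPort;
    import lejos.hardware.sensor.EV3GyroSensor;
    import lejos.robotics.SampleProvider;

    public class GyroDemo {
        public static void main(String[] args) {
            EV3GyroSensor gyro = new EV3GyroSensor(SensorPort.S2);
            SampleProvider angle = gyro.getAngleMode();
            SampleProvider rate = gyro.getRateMode();
            float[] angleSample = new float[angle.sampleSize()];
            float[] rateSample = new float[rate.sampleSize()];

            gyro.reset();   // current heading becomes the zero point

            LCD.drawString("GyroDemo", 0, 0);

            // Positive angles mean a left turn, negative a right turn.
            while (!Button.ESCAPE.isDown()) {
                angle.fetchSample(angleSample, 0);
                rate.fetchSample(rateSample, 0);
                LCD.drawString(String.format("Angle: %6.1f ", angleSample[0]), 0, 2);
                LCD.drawString(String.format("Rate : %6.1f ", rateSample[0]), 0, 3);
            }
            gyro.close();
        }
    }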

This program will set the Gyro heading to zero at start up and then display any direction change in degrees (positive when turning left, negative when turning right) from the zero point. Hold your EV3 in your hand and rotate it to see the angle from the starting point displayed. The rate of change of the angle is also displayed. Note that the angle is cumulative: if you turn right 45 degrees and then turn another 45 degrees to the right, you will see an angle of 90 degrees (negative, by the convention above). The angle will get larger or smaller as you rotate the EV3. You can call the reset() method to set the current direction of the EV3 to zero and start measuring any direction change from the new heading.

 


Overview: 
Explore the use of the EV3 UltraSonic distance sensor.
Objectives: 

Learn how to use the EV3 UltraSonic distance sensor to detect objects in  the robot's environment. Learn more about utility classes.

Content: 

In order for your robots to navigate in their environment, you may need to detect obstacles or objects and maneuver in relation to these objects. One way to do that is with the EV3 UltraSonic distance sensor. This sensor uses sound waves to detect objects and measure the distance to them.

Before we get to that, we will add an additional library class, Lcd. This class exposes simple methods to display text on the EV3 LCD screen. This will be useful in the sensor demo programs. Create a new class file in the library package called Lcd and copy this code into that class just as you did earlier with the Logging library class. You should look it over and see how it works and how it provides useful methods to make your programs easier to write.

Next, just as we did for the Touch Sensor, we are going to use a library class to "wrap" or simplify using the UltraSonic sensor. Create a new class file in the library package called UltraSonicSensor and copy this code into that class. You should study this class and see how it operates the EV3UltrasonicSensor class and exposes a simple set of methods that make using the sensor in your code easier.

With those library classes ready, we can move to a demo program showing the UltraSonic sensor in action. Create a new package called ev3.exercises.ultraSonicDemo. In that package create a new class called UltraSonicDemo and copy the following code into it:
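
The original listing is not reproduced here. The sketch below shows the described behavior using the EV3UltrasonicSensor and LCD classes directly; the sensor port (S4) is an assumption:

    package ev3.exercises.ultraSonicDemo;

    import lejos.hardware.Button;
    import lejos.hardware.lcd.LCD;
    import lejos.hardware.port.SensorPort;
    import lejos.hardware.sensor.EV3UltrasonicSensor;
    import lejos.robotics.SampleProvider;

    public class UltraSonicDemo {
        public static void main(String[] args) {
            EV3UltrasonicSensor sonic = new EV3UltrasonicSensor(SensorPort.S4);
            SampleProvider distance = sonic.getDistanceMode();
            float[] sample = new float[distance.sampleSize()];

            LCD.drawString("UltraSonicDemo", 0, 0);

            // Display the distance (in meters) until an object comes
            // within a quarter meter or the escape key is pressed.
            do {
                distance.fetchSample(sample, 0);
                LCD.drawString(String.format("Dist: %5.2f m ", sample[0]), 0, 2);
            } while (sample[0] >= 0.25f && !Button.ESCAPE.isDown());

            sonic.close();
        }
    }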

When you run this program, you will see on the LCD the distance to any object detected by the distance sensor. You can put your hand in front of the sensor and see the distance in meters between the sensor and your hand. When you bring your hand within a quarter of a meter, the program will stop.

You can see how the Lcd class is used to display information on the LCD in fixed positions and make it easy for you to observe what is happening in real time.

 


Overview: 
Explore the Regulated motor class.
Objectives: 

Understand the difference between the Regulated and UnRegulated motor classes. Understand how to use Regulated motors.

Content: 

The EV3 supports a variety of motors. The EV3 kit comes with two types of motors, large and medium. This relates to their size and power. The leJOS API contains two main types of motor control classes, regulated and unregulated. Regulated motors have two classes, EV3LargeRegulatedMotor and EV3MediumRegulatedMotor. Unregulated motors have one class, UnregulatedMotor.

UnregulatedMotor can be used with either the large or medium motors. Unregulated motor speeds are controlled by power level only. Regulated motors use rotational speed and rotational targets (angles) to control their speed, and use the motor's internal tachometers to make sure speeds and rotations are accurate. Regulated motors can be set to rotate at a speed given in degrees per second. These motors can also be set to rotate a specific number of degrees and then stop. They can be set to run on their own to these rotational targets while your code continues with other tasks. These classes have a number of methods that allow you to monitor the motor's operation.

Here is an example of doing motor control with regulated motor classes. Create a new package called ev3.exercises.driveRegulated. Then create a class called DriveRegulated in that package and paste the code below into that class.
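
The original listing is not reproduced here; this sketch illustrates the regulated motor methods the lesson describes. The motor ports (A and B) and the speed and rotation values are assumptions:

    package ev3.exercises.driveRegulated;

    import lejos.hardware.Button;
    import lejos.hardware.lcd.LCD;
    import lejos.hardware.motor.EV3LargeRegulatedMotor;
    import lejos.hardware.port.MotorPort;

    public class DriveRegulated {
        public static void main(String[] args) {
            EV3LargeRegulatedMotor leftMotor = new EV3LargeRegulatedMotor(MotorPort.A);
            EV3LargeRegulatedMotor rightMotor = new EV3LargeRegulatedMotor(MotorPort.B);

            // Un-comment to see how acceleration affects starting and stopping.
            // leftMotor.setAcceleration(100);
            // rightMotor.setAcceleration(100);

            LCD.drawString("Press enter", 0, 0);
            Button.ENTER.waitForPressAndRelease();

            // Run both motors at 360 degrees per second (one revolution/second).
            leftMotor.setSpeed(360);
            rightMotor.setSpeed(360);
            leftMotor.forward();
            rightMotor.forward();
            Button.ENTER.waitForPressAndRelease();
            leftMotor.stop(true);   // true = return immediately, don't wait
            rightMotor.stop();

            // rotate() turns a relative number of degrees and then stops.
            Button.ENTER.waitForPressAndRelease();
            leftMotor.rotate(720, true);   // immediate return; runs on its own
            rightMotor.rotate(720);        // blocks until the rotation completes

            // rotateTo() turns to an absolute tachometer target.
            Button.ENTER.waitForPressAndRelease();
            leftMotor.rotateTo(0, true);
            rightMotor.rotateTo(0);

            leftMotor.close();
            rightMotor.close();
        }
    }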

When you test this code, you can hold your robot and press the enter button at each phase to see how the motors operate. You can then un-comment the setAcceleration() methods to see how this affects motor starting and stopping.

The example shows controlling motor speed with setSpeed(), which sets speed in degrees per second, and rotate() / rotateTo(), which cause the motor to turn the specified amount in degrees and then stop.

 


Overview: 
Explore the idea of code reuse by putting commonly used functions in a class that can be shared among many projects.
Objectives: 

Understand library classes and how they facilitate code reuse. Demonstrate how to create and use a library class.

Content: 

Many times we write code that will be repeated in a single project or code that is repeated in different projects. An example of this is the code to read the touch sensor in the DriveCircle project. This code would be repeated in every project that uses a touch sensor. It can be useful to put utility code, that is, code that can be used in several places or several projects, into classes that can be called upon whenever needed. Another benefit appears if we need to change how we handle the touch sensor: if the sensor code was repeated in every project, we would need to go to every project and change the code. If we have that repeated code in a utility class, then we only need to change it in one place.

Here is an example that takes the code handling the touch sensor in the DriveCircle project and puts it into a utility class called TouchSensor. If you have not already done so, create a new package called ev3.exercises.library. "Library" is just a name we selected; you could call the package utilities or tools or whatever makes sense to you. In that package create a class called TouchSensor and copy/paste this code into it:
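
The original listing is not reproduced here; below is a sketch of what such a wrapper class might look like. The method names (isPressed(), close()) are our assumptions based on the lesson's description:

    package ev3.exercises.library;

    import lejos.hardware.port.Port;
    import lejos.hardware.sensor.EV3TouchSensor;
    import lejos.robotics.SampleProvider;

    public class TouchSensor {
        private final EV3TouchSensor sensor;
        private final SampleProvider touch;
        private final float[] sample;

        /**
         * Wrap an EV3TouchSensor attached to the given port.
         * @param port Sensor port the touch sensor is plugged into.
         */
        public TouchSensor(Port port) {
            sensor = new EV3TouchSensor(port);
            touch = sensor.getTouchMode();
            sample = new float[touch.sampleSize()];
        }

        /**
         * Check the current state of the touch sensor.
         * @return True if the sensor button is pressed.
         */
        public boolean isPressed() {
            touch.fetchSample(sample, 0);
            return sample[0] != 0;
        }

        /**
         * Release the underlying sensor resources.
         */
        public void close() {
            sensor.close();
        }
    }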

We have created a class that handles the details of the touch sensor. Any class that uses a touch sensor can create an instance of this class and monitor the touch sensor without having to worry about the details of how the sensor is handled. Note the comments before each method. These are JavaDoc comments and Eclipse will show this information when you are working with the class in other locations.

Now let's create a new project package and class to hold the DriveCircle2 exercise shown below. This is the DriveCircle exercise that uses the new TouchSensor class instead of the original code to handle the touch sensor.
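
The original DriveCircle2 listing is not reproduced here. This sketch shows the idea: the driving code is unchanged in spirit, but all touch sensor details go through the library class. The motor ports, speeds and sensor port are assumptions:

    package ev3.exercises.driveCircle2;

    import lejos.hardware.Button;
    import lejos.hardware.motor.EV3LargeRegulatedMotor;
    import lejos.hardware.port.MotorPort;
    import lejos.hardware.port.SensorPort;
    import ev3.exercises.library.TouchSensor;

    public class DriveCircle2 {
        public static void main(String[] args) {
            EV3LargeRegulatedMotor leftMotor = new EV3LargeRegulatedMotor(MotorPort.A);
            EV3LargeRegulatedMotor rightMotor = new EV3LargeRegulatedMotor(MotorPort.B);
            TouchSensor touch = new TouchSensor(SensorPort.S1);

            // Different wheel speeds make the robot drive in a circle.
            leftMotor.setSpeed(360);
            rightMotor.setSpeed(180);
            leftMotor.forward();
            rightMotor.forward();

            // The library class hides the sample-fetching details.
            while (!Button.ESCAPE.isDown() && !touch.isPressed()) {
                Thread.yield();
            }

            leftMotor.stop(true);
            rightMotor.stop();
            touch.close();
        }
    }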

As you can see, we just create an instance of the TouchSensor class and make use of it. The DriveCircle2 class is simpler, and if we update the TouchSensor class, DriveCircle2 will not have to be changed in terms of the details of how touch sensors are handled, though it will need to be recompiled to incorporate the updated library class.

 

