Back-end Engineering Articles

I write and talk about backend topics like Ruby, Ruby on Rails, Databases, Testing, Architecture / Infrastructure / System Design, Cloud, DevOps, Background Jobs, and more...

Twitter:
@daniel_moralesp

2019-04-12

What's Ruby (Programming Language)?

So far, we've been talking about the initial setup and configuration you need on your machine just to start writing code in whatever programming language you choose. For that reason, we have talked about Versioning, Package Managers, different Operating Systems, Text Editors, the Console or Terminal, and Git & GitHub. These topics may seem unrelated, but they all start to make sense once we look at a programming language like Ruby.

But to understand Ruby, as always, we first need to go over the main theoretical concepts behind it. With that, we'll have enough context for what we want to study today.

What's a programming language?

First things first, what's a programming language? A programming language is any set of rules that converts strings, or graphical program elements in the case of visual programming languages, into various kinds of machine code output. Programming languages are one kind of computer language used in computer programming to implement algorithms.

Most programming languages consist of instructions for computers. However, programmable machines use a set of specific instructions rather than general programming languages. 

Thousands of different programming languages have been created, and more appear every year. Many programming languages are written in an imperative form (i.e., as a sequence of operations to perform). In contrast, other languages use a declarative form (i.e., the desired result is specified, not how to achieve it).

The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning), which are generally defined by a formal language. In addition, some languages are characterized by a specification document (for example, the C programming language is specified by an ISO Standard). In contrast, other languages (such as Perl) have a dominant implementation treated as a reference. Finally, some languages have both, with the primary language defined by a standard and common extensions taken from the dominant implementation.

Programming language theory is a subfield of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages. If you want to study this theory in more detail, please refer to this link: https://en.wikipedia.org/wiki/Programming_language.


What's an algorithm?

Now that we know a bit more about programming languages, the next question is what an algorithm is. It is fair to say that algorithms are the sets of instructions we create with a programming language. It's like learning a new language such as French: you have all the possible vocabulary at hand (that's the programming language), and from it you start building stories. Each story is like an algorithm. If you want to tell a story to your friends in French, you have to put together an algorithm (your story) that makes sense and produces some output.

More theoretically speaking, in computer science an algorithm is a finite sequence of well-defined instructions, typically used to solve a class of specific problems or to perform a computation.
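To make this concrete, here is a minimal sketch in Ruby (the language we'll study) of a small algorithm: a finite sequence of well-defined steps that takes an input and produces an output. The method name and the sample numbers are just illustrative choices.

    # A tiny algorithm: find the largest number in a list.
    # Input: an array of numbers. Output: the largest one.
    def largest(numbers)
      max = numbers.first            # start with the first element
      numbers.each do |n|            # look at every element exactly once
        max = n if n > max           # keep the biggest value seen so far
      end
      max                            # finite steps, well-defined output
    end

    puts largest([3, 7, 2, 9, 4])    # => 9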

Algorithms are used as specifications for performing calculations, data processing, automated reasoning, automated decision-making, and other tasks. In contrast, a heuristic is an approach to problem-solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result.

As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. If you want to know more details, please hit this link.


What's the difference between compiled languages and interpreted languages?

Another important concept we have to know in the programming world is the difference between compiled and interpreted languages. To understand it, we have to pay attention to the most crucial step here: "compilation."

Let's walk through an example with a compiled language:

  • You've been writing code all day long. Now, at the end of the day, you have 1,000 new lines of code
  • Working in a compiled language, you're ready to "pack" or "compile" all of that code so you can ship it and test its behavior
  • You run the "build" command, which is the instruction that compiles all of that code
  • Once the compilation is done, you can send the result to the machine and get an output

Let's do the same example but with an interpreted language:
  • You've been writing code for the last hour; you have 25 new lines of code
  • Working in an interpreted language, you just "execute" the code
  • You get the result right away, so you can see if everything is working correctly



We can now see some differences and similarities between the two. But take care: each of them has its own pros and cons, so it's better to understand both in more detail. By the way, Ruby is an interpreted language.
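As a rough sketch of that interpreted workflow, assuming you have Ruby installed and a file called hello.rb (the file name is just an example), there is no separate build step: you run the source file directly and see the result immediately.

    # hello.rb -- run it directly with:  ruby hello.rb
    # No "build" or "compile" step is needed; the interpreter
    # reads the source and executes it right away.
    name = "Ruby"
    puts "Hello from #{name}!"   # => Hello from Ruby!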

The Compiler

A compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g., assembly language, object code, or machine code) to create an executable program.

A compiled language is a programming language whose implementations are typically compilers (translators that generate machine code from source code), not interpreters (step-by-step executors of source code, where no pre-runtime translation occurs).

The term is somewhat vague. In principle, any language can be implemented with a compiler or an interpreter. Combining both approaches is also common: a compiler can translate the source code into some intermediate form (often called p-code or bytecode), which is then passed to an interpreter that executes it.
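Ruby itself is an example of this combined approach: the reference implementation (CRuby/MRI) first compiles your source into YARV bytecode and then interprets that bytecode. As a small sketch, you can peek at this intermediate form with RubyVM::InstructionSequence (available in CRuby; other implementations may differ):

    # Compile a snippet of Ruby source into YARV bytecode (CRuby only)
    iseq = RubyVM::InstructionSequence.compile("puts 1 + 2")
    # Print the human-readable disassembly of that bytecode
    puts iseq.disasm
    # The interpreter's virtual machine is what executes these instructions.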

The Interpreter


An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:

  • Parse the source code and perform its behavior directly;
  • Translate the source code into some efficient intermediate representation or object code and immediately execute that;
  • Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's virtual machine.


While interpretation and compilation are the two primary means of implementing programming languages, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" and "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations. For more details about this, please refer to this link.

Also, you can find a good blog post about more differences here: https://www.freecodecamp.org/news/compiled-versus-interpreted-languages/.


What’s Ruby?

With all of this background on compiled and interpreted languages in place, we can now properly define what Ruby is: Ruby is an interpreted, high-level, general-purpose programming language that supports multiple programming paradigms. It was designed with an emphasis on programming productivity and simplicity. In Ruby, everything is an object, including primitive data types. It was developed in the mid-1990s in Japan by Yukihiro "Matz" Matsumoto.

Ruby is dynamically typed and uses garbage collection and just-in-time compilation. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Its creator was influenced by Perl, Smalltalk, Eiffel, Ada, BASIC, and Lisp.
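As a quick illustration of those paradigms, the same task (doubling a list of numbers) can be written in a procedural, an object-oriented, and a functional style in Ruby; the class and variable names below are just example choices.

    numbers = [1, 2, 3]

    # Procedural style: a plain sequence of steps that mutates an array
    doubled = []
    numbers.each { |n| doubled << n * 2 }

    # Object-oriented style: the behavior lives in a class
    class Doubler
      def call(list)
        list.map { |n| n * 2 }
      end
    end
    doubled_oo = Doubler.new.call(numbers)

    # Functional style: no mutation, just a transformation
    doubled_fn = numbers.map { |n| n * 2 }

    p doubled, doubled_oo, doubled_fn   # => [2, 4, 6] printed three times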

I know, there are a lot of new fancy keywords here defining the language; let's digest them a little bit:

  • High-level programming language: a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate (or even hide entirely) significant areas of computing systems (e.g., memory management), making the process of developing a program more straightforward and more understandable than when using a lower-level language. The amount of abstraction provided defines how "high-level" a programming language is.

  • General-purpose programming language: a general-purpose programming language is a programming language designed to be used for building software in a wide variety of application domains, like web apps, command-line tools, and others.

  • Programming paradigms: programming paradigms are a way to classify programming languages based on their features. Languages can be classified into multiple paradigms. Some paradigms are concerned mainly with implications for the language's execution model, such as allowing side effects or whether the execution model defines the sequence of operations. Other paradigms are concerned mainly with how code is organized, such as grouping code into units along with the state that is modified by the code. Yet others are primarily concerned with the style of syntax and grammar.

  • Everything is an object: here we're talking about object-oriented programming languages. Object-Oriented Programming (OOP) is a programming paradigm that relies on the concept of classes and objects. It is used to structure a software program into simple, reusable code blueprints (usually called classes), which are used to create individual instances of objects. Practically everything in Ruby is an object, except control structures; even methods and code blocks can be represented as objects and treated as such (see the short sketch after this list).

  • Dynamically typed: a dynamic programming language is a class of high-level programming languages which, at runtime, execute many common programming behaviors that static programming languages perform during compilation.

  • Garbage collection: a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program but is no longer referenced (also called garbage).

  • Just-in-time compilation: just-in-time (JIT) compilation (also dynamic translation or runtime compilation) is a way of executing computer code that involves compilation during the execution of a program (at run time) rather than before execution. This may consist of source code translation but is more commonly translation of bytecode to machine code, which is then executed directly. A system implementing a JIT compiler typically continuously analyses the code being executed and identifies the parts of the code where the speedup gained from compilation or recompilation would outweigh the overhead of compiling that code.
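To make a couple of these terms concrete, here is a short sketch showing that everything in Ruby is an object and that variables are dynamically typed (the variable names are arbitrary):

    # Everything is an object, even numbers, strings, and nil
    puts 42.class          # => Integer
    puts "hello".class     # => String
    puts nil.class         # => NilClass

    # Even a method can be grabbed and passed around as an object
    greet = method(:puts)
    greet.call("hi there") # prints "hi there"

    # Dynamic typing: the same variable can hold different types at runtime
    value = 10
    value = "ten"          # no compile-time type error; types are checked at runtime
    puts value.class       # => String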
 
As you can see, there are a lot of terms we refer to here, and maybe we'll need more, but for now it is clear what category of programming language Ruby falls into. With all of this theory in place, we can start writing some basic commands in Ruby.

I hope you learned a lot; see you in the next post.

Thanks for reading
Daniel Morales