Characteristics of a secure system
A communication system has to provide certain guarantees to be called secure. While there are many characteristics that are desirable in a communication system, the requirements for a secure system can be summarized as confidentiality, integrity and authenticity. All three characteristics have to hold; if one of them is weak or even missing, the whole system cannot be considered secure.
Security has to be distinguished from clandestinity. A secure system is not required to hide the fact that communication is taking place or to obfuscate its participants. That would constitute a different requirement on a communication system.
Also not included are mechanisms to contain the consequences of a breach. Perfect forward secrecy, future secrecy and plausible deniability are methods to limit the effect of a compromise - how much an attacker gains by knowing the encryption secret (the key) at some point in time. For a system to be called secure, it is not required to implement them.
Although this definition of security might seem very limited, and real-world applications will certainly implement the other mentioned characteristics as well, it helps reasoning about a system if the definition is clear-cut and doesn't include neighboring concerns.
Confidentiality
Confidentiality means that only the intended receivers of a message are able to read it. Everybody else should see only an undecipherable lump of seemingly random characters. This is where encryption algorithms are used. For an in-depth discussion of encryption see the background text on public key cryptography (TODO: Link). Encryption traditionally cares about the content of a message. Sender, receiver, time of sending, size of the message - all the meta information about the communication itself is not addressed by encryption.
This ignores the fact that the mere existence of communication between two users might allow an educated guess at the content.
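As a minimal sketch of this distinction, the following uses a one-time pad (XOR with a random key, chosen here purely because it needs no external library): the content becomes unreadable without the key, but an observer still learns that a message of a certain size was sent at a certain time.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR every plaintext byte with the corresponding key byte.
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # random key, as long as the message

ciphertext = encrypt(key, message)

# The content is protected: without the key, the ciphertext is random noise.
assert encrypt(key, ciphertext) == message  # XOR is its own inverse

# The metadata is not: the length (and the act of sending) is still visible.
assert len(ciphertext) == len(message)
```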
Integrity
Integrity ensures that messages reach their intended receivers unmodified. Any modification done to a message while in transit should be immediately obvious.
At first glance this looks like it is already achieved by encryption, because a successful modification requires knowledge of the encryption key. Without that key, a modification would corrupt the message. If the message was a text, the damage would be obvious - if it was a binary, the damaged data might even represent valid content. Another attack would be to simply append an old, intercepted message to a new one. Depending on how the encryption works in detail, this might go unnoticed, since the encryption key would be the right one.
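Continuing the one-time-pad sketch from above, a small example shows why encryption alone does not guarantee integrity: an attacker who guesses the plaintext layout (but does not know the key) can flip bits in transit and produce a different, perfectly valid-looking message.

```python
import secrets

def xor(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(16)
ciphertext = xor(key, b"pay Alice $100  ")

# The attacker flips exactly the bits that turn '1' into '9' at the
# known position of the amount - no key required.
tampered = bytearray(ciphertext)
tampered[11] ^= ord('1') ^ ord('9')

print(xor(key, bytes(tampered)))  # b"pay Alice $900  "
```

The receiver decrypts a syntactically valid message and has no way to notice the modification - which is exactly the gap that a separate integrity mechanism closes.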
To decouple the guarantee of integrity from the applied encryption - the guarantee of confidentiality - actual systems use cryptographic checksums of the content, so-called message authentication codes, to detect modifications.
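A minimal sketch of such a keyed checksum, using Python's standard-library HMAC (the key name and messages are made up for illustration): the sender attaches a tag computed over the content, and the receiver recomputes and compares it.

```python
import hashlib
import hmac

mac_key = b"shared-secret-key"  # assumed to be shared between the parties
message = b"pay Alice $100"

# Sender: compute a keyed checksum (MAC) over the message content.
tag = hmac.new(mac_key, message, hashlib.sha256).digest()

# Receiver: recompute the MAC and compare in constant time.
def verify(key: bytes, msg: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

assert verify(mac_key, message, tag)                      # unmodified: accepted
assert not verify(mac_key, b"pay Mallory $100", tag)      # modified: detected
```

Because the tag depends on a key, an attacker can neither fix it up after flipping bits nor attach an old tag to a new message without being noticed.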
Authenticity
Authenticity is about the certainty that a message actually comes from its alleged sender. Impersonation should be impossible, or at least obvious if it is attempted. More generally, authenticity protects against man-in-the-middle attacks. Or, put the other way around: man-in-the-middle attacks are basically attempts to subvert the methods of guaranteeing authenticity within a communication system.
This is usually realized by adding a piece of data to the communication which only the alleged sender can create, but everyone else can validate.
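Digital signatures are the standard realization of this idea: only the holder of the private key can create the signature, while anyone with the public key can validate it. A sketch using the third-party `cryptography` package (an assumption - any modern signature library follows the same sign/verify pattern):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # known only to the sender
public_key = private_key.public_key()       # distributed to everybody

message = b"this message really is from me"
signature = private_key.sign(message)       # only the sender can create this

# Everyone else can validate it - verify() raises on any mismatch.
public_key.verify(signature, message)

# An impersonation attempt (a different message, or a forged signature)
# is immediately obvious:
try:
    public_key.verify(signature, b"forged message")
except InvalidSignature:
    print("impersonation detected")
```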
Open source and reproducibility
While open source and reproducibility are not part of the requirements for a secure system, for all practical purposes it is better to be able to validate the promises of a software vendor than to be forced to blindly believe them.
Being open source of course doesn't imply that a validation (aka a code audit) has taken place, that the software upholds its promises, or that it is free of bugs. But in contrast to closed source software (which shares the same problems), there is at least the possibility.