A Simple Progress Bar in Python

Recently, I have been working with the Requests library in Python. I wrote a simple function to pull down a file that took more than a minute to download. While waiting for the download to complete I realized it would be nice to have some insight into the download’s progress. A quick search on StackOverflow led to an excellent example. Below is a simple way to display a progress bar while downloading a file.

import sys

import requests


def download_file(url, name):
    '''
    Function takes a url and a filename, creates a request, opens a 
    file and streams the content in chunks to the file system.
    It then writes out an '=' symbol for every two percent of the total
    content length to the console.  
    '''
    filename = 'myfile_' + str(name) + '.ext'
    r = requests.get(url, stream=True)
    with open(filename, 'wb') as f:

        total_length = r.headers.get('Content-Length')

        if total_length is None:  # no content length header
            f.write(r.content)
        else:
            downloaded = 0
            total_length = int(total_length)
            for data in r.iter_content(chunk_size=4096):
                downloaded += len(data)
                f.write(data)
                done = int(50 * downloaded / total_length)
                sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50 - done)))
                sys.stdout.flush()
            sys.stdout.write("\n")  # move past the bar when done

    return 1

What’s going on?

requests.get() takes a URL and creates an HTTP request. The stream=True flag is an optional keyword argument that tells Requests to download the content in chunks instead of attempting to pull it all into memory at once.

The response headers are then checked for ‘Content-Length’. We use the ‘Content-Length’ value to calculate how much has been downloaded and how much is left to download. The running total is stored in a variable and updated as the chunks are processed.

The final piece to point out in this little function is the iter_content() method. iter_content():

Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory.

This helps handle larger files and gives us a way to track progress. As chunks are processed, variables can be updated. If you do not need or want to roll your own, check out the tqdm library.
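The bar-drawing logic inside the download loop can also be pulled out into a small, testable helper. A minimal sketch, assuming the same 50-column bar as above (the name progress_bar is my own, not from the original function):

```python
def progress_bar(downloaded, total_length, width=50):
    """Return a text progress bar like the one the download loop prints.

    One '=' is drawn for each full (100 / width) percent completed, so
    with the default width of 50 each '=' represents two percent.
    """
    done = int(width * downloaded / total_length)
    return "[%s%s]" % ('=' * done, ' ' * (width - done))


# The download loop would then emit the bar with a carriage return:
#     sys.stdout.write("\r" + progress_bar(downloaded, total_length))
#     sys.stdout.flush()
```

Isolating the rendering from the I/O makes the two-percent-per-symbol arithmetic easy to check without downloading anything.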

Logic for Artificial Intelligence

“Logic has both seductive advantages and bothersome disadvantages.”

Patrick Winston, Artificial Intelligence, p. 283

Logic in artificial intelligence can be used to help an agent create rules of inference. It provides a formal framework for creating if-then statements. Formal logic statements can be difficult for beginners because of the symbols and vocabulary used. Below is a cheat sheet for some of the basic symbols and definitions.

Symbol   Definition
∧        Logical conjunction. In most instances it will be used as an AND operator.
∨        Logical disjunction. In most instances it will be used as an OR operator.
∀        Universal quantifier. Placed in front of a statement that applies to ALL entities in the agent’s universe.
∃        Existential quantifier. Placed in front of a statement that applies to at least one entity in the agent’s universe.
¬        Negation. The statement is only true if the condition is false.

Word           Definition
Conjunction    And. A conjunction is true if and only if all of its operands are true. The symbol used to represent this operator is typically ∧ or &.
Conjunct       An operand of a conjunction.
Disjunction    Or. A disjunction is true if and only if one or more of its operands is true.
Disjunct       An operand of a disjunction.
Predicates     Boolean-valued functions or relationships, e.g. A ∧ B = True, or A and B have a specific relationship.
modus ponens   A rule of inference: given that A is true and A implies B, conclude that B is true.
monotonic      A property that states a “function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false”.
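The modus ponens rule in the table can be sketched as a tiny forward-chaining loop in Python. This is an illustrative sketch, not a real theorem prover; the (premises, conclusion) rule format and the example facts are my own choices:

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: if every premise of a rule is a
    known fact, add the rule's conclusion to the set of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # modus ponens fires
                changed = True
    return facts


# Rules are (premises, conclusion) pairs, e.g.
# Bird(tweety) ∧ Healthy(tweety) → Flies(tweety)
rules = [
    (("Bird(tweety)", "Healthy(tweety)"), "Flies(tweety)"),
]
derived = forward_chain({"Bird(tweety)", "Healthy(tweety)"}, rules)
# "Flies(tweety)" is now among the derived facts
```

The loop keeps sweeping the rules until no new fact can be added, which is exactly the monotonic behavior described above: facts are only ever added, never retracted.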

Logic focuses on using knowledge in a provably correct way. When it is used in AI, it guarantees that conclusions follow from the premises, not that the premises themselves are true. If an agent is taught that all birds can fly, it can validly infer that anything that cannot fly is not a bird. However, it will run into problems when classifying a penguin. 

It is important to keep in mind that logic is a weak representation of certain kinds of knowledge. The difference between water and ice is an example of knowledge that would be difficult to represent using logic. Determining how good a “deal” is would also be better suited to a different knowledge representation. If dealing with a change of state or ranking options, using a different knowledge system would be more appropriate.

Qualia

Have you ever tried to describe the color red to someone who suffers from protanopia, deuteranopia, protanomaly, or deuteranomaly? It is nearly impossible, since those who are red-green color blind are missing or have altered versions of the corresponding photoreceptors. The experience of seeing red is immediately familiar to those who have had it. And that type of experience, one which is difficult to communicate, does not change based on other experiences, is unique to the individual experiencing it, and is immediately recognized, is qualia. 

Frank Jackson offered the following definition of qualia:


“[Qualia are] certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes.”

A few years later another philosopher and cognitive scientist, Daniel Dennett, identified four properties ascribed to qualia: “ineffable”, “intrinsic”, “private”, and “directly or immediately apprehensible in consciousness” (Tye 2002, 447). In simpler language, qualia are the properties associated with how something was experienced. The qualia of “seeing red” may be difficult to describe on their own, but when compared to the qualia of “seeing green” they can be conceptualized and contrasted. That comparison underlies the “spectrum inversion” thought experiment, a famous version of which was presented by John Locke in “Of True and False Ideas”:


Neither would it carry any Imputation of Falshood to our simple Ideas, if by the different Structure of our Organs, it were so ordered, That the same Object should produce in several Men’s Minds different Ideas at the same time; v.g. if the Idea, that a Violet produced in one Man’s Mind by his Eyes, were the same that a Marigold produces in another Man’s, and vice versâ. For since this could never be known: because one Man’s Mind could not pass into another Man’s Body, to perceive, what Appearances were produced by those Organs; neither the Ideas hereby, nor the Names, would be at all confounded, or any Falshood be in either. For all Things, that had the Texture of a Violet, producing constantly the Idea, which he called Blue, and those which had the Texture of a Marigold, producing constantly the Idea, which he as constantly called Yellow, whatever those Appearances were in his Mind; he would be able as regularly to distinguish Things for his Use by those Appearances, and understand, and signify those distinctions, marked by the Names Blue and Yellow, as if the Appearances, or Ideas in his Mind, received from those two Flowers, were exactly the same, with the Ideas in other Men’s Minds. 

(Byrne 2016)

These are very difficult concepts to teach an artificially intelligent agent. Concepts with formal representations, like a triangle or even something like a reptile, are easier for artificially intelligent agents to differentiate. Things like “tastes salty” or “splitting headache” are very difficult to transfer to a learning agent since they are extremely personal. Whether or not qualia exist is actively debated in the philosophical community, especially in arguments around consciousness and the self. I think that is why the exploration of qualia is so interesting to the development of AI. 


I think people like to explore what it could be like for robots with general intelligence to be sentient. But to develop those qualities, the engineer must examine the meta-cognitive processes that make up the human experience. This task is complicated and one that philosophers still argue about. What does it mean to be human? That question will need to be explored further in order to get closer to general artificial intelligence. In the meantime, I invite you to make a mental note of the next time you try to describe qualia to someone else. What analogies did you use? How similar do you think the same experience is for both of you? 

References

  • Byrne, Alex, “Inverted Qualia”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/qualia-inverted/>.
  • Tye, M., 2002, “Visual Qualia and Visual Content Revisited”, in Philosophy of Mind, D. Chalmers (ed.), Oxford: Oxford University Press.

A Brief Introduction to Bayes’ Theorem

Bayes’ Theorem states that the conditional probability of A given B is the conditional probability of B given A scaled by the relative probability of A compared to B. I find it easier to understand through a practical example. Let’s say you are having a medical test performed at the recommendation of your doctor, who recommends tests to everyone because they get a nice kickback and college tuition is not cheap! You are young and healthy and are being tested for a new form of cancer that exists in only 1% of the population. The test correctly detects the cancer 8 out of 10 times in an affected individual. However, it also “detects” cancer in 1 out of 10 cancer-free patients. Your test results come back positive! But before you get worried, let’s figure out the chance that you actually have cancer.

This is a job for conditional probability. You want to know the probability that you, a young healthy individual, actually have cancer given a positive test. “The chance of an event is the number of ways it could happen given all possible outcomes”[1]:

Probability = event / all possibilities

or

P(A | B) = P(B | A) × P(A) / P(B)

Bayes’ Theorem

When the result is considered in conjunction with the likelihood of other outcomes, it is not that troubling. The table below shows the likelihood of each outcome:

                Cancer (1% of Pop.)        No Cancer (99% of Pop.)
Test Positive   True Positive              False Positive
                1% × 80% = 0.8%            99% × 10% = 9.9%
Test Negative   False Negative             True Negative
                1% × 20% = 0.2%            99% × 90% = 89.1%

A true positive accounts for only 0.8% of all outcomes, while a false positive is much more likely at 9.9%. So given a positive result, the chance you actually have cancer is 0.8% / (0.8% + 9.9%) ≈ 7.5%. The likelihood you have cancer even with a positive test result is low. You should definitely seek a second opinion.
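The arithmetic from the table can be checked in a few lines of Python (the variable names here are my own):

```python
prior = 0.01        # 1% of the population has the cancer
sensitivity = 0.80  # 8 out of 10 affected individuals test positive
false_pos = 0.10    # 1 in 10 cancer-free patients also test positive

# Total probability of testing positive: true positives + false positives
p_positive = prior * sensitivity + (1 - prior) * false_pos  # 0.8% + 9.9% = 10.7%

# Bayes' Theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
posterior = (sensitivity * prior) / p_positive

print(round(posterior, 3))  # → 0.075
```

Even with a positive test, the posterior probability of cancer is only about 7.5%, because the disease is rare enough that false positives dominate.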

It is important to keep in mind that we are calculating odds of an event given all possibilities. You probably do rough versions of this calculation daily. “Given the dark rain clouds outside and rain in the forecast, I will take an umbrella since I believe it will rain while I am out.” If that is not enough to get you excited about Bayes and his contribution to statistics, know that he did it all in an effort to prove the existence of God! If you would like to learn more, check out the links below.

[1]A primer on Bayes theorem which I used as inspiration: https://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/

The peer reviewed “wiki” entry on Bayesian statistics: http://www.scholarpedia.org/article/Bayesian_statistics

Stanford encyclopedia of Philosophy entry on Bayes Theorem: https://plato.stanford.edu/entries/bayes-theorem/