Sunday 29 June 2014

Theory of Stupidity



The pleasure of working with smart colleagues is wonderful.
Nothing pays more (professionally speaking) than the moment in which you manage to do a good job in great synchrony with a smart colleague.
The moment in which you realize that exchanging two sentences is enough to communicate a very complicated concept, with the certainty of being understood.

The chance of meeting a smart colleague who enriches you can be very low.

On the contrary, the chance of meeting a dumb colleague may be very high! The colleague who makes his own life harder without any (logical) reason, the colleague who produces that piece of code you will admire forever.

We always have to deal with stupidity in our everyday life.
Is there any tool that can help us? Well, some years ago I read an inspiring book!

"Allegro ma non troppo" by Carlo M. Cipolla.

The book is a scientific analysis of human stupidity. It proposes an illuminating mathematical model and 5 theorems.

If anybody out there is reading this blog, you know that its posts are meant to be short! I am therefore reporting here just the laws and the model. If you are touched by such illumination, then I suggest you read the book.



For the pictures I thank this website:
http://nicholasbordas.com/archives_posts/what-if-we-didnt-underestimate-stupidity












Friday 20 June 2014

Project management triangle

The basic rule of every project!

"If you move one of the vertex, be ready to move also the others!".

If you want more scope (more features or more quality), then be ready to allow more time.
If you want to reduce the time, then be ready to increase the cost.
If you cannot increase the cost, then be ready to give up some features.



Thursday 19 June 2014

Dale Carnegie





I have just read a very interesting book:
"How to Win Friends and Influence People" by Dale Carnegie.

I have to say it is a great book! It really struck me deep inside.
A nice aspect of this book is that every principle comes with a very short sentence of great recap power.
I am currently reading another of his books: "How to Stop Worrying and Start Living".

I am posting his "Golden Rules" here in order to have a quick reference:



Become a Friendlier Person
1. Don’t criticize, condemn or complain.
2. Give honest, sincere appreciation.
3. Arouse in the other person an eager want.
4. Become genuinely interested in other people.
5. Smile.
6. Remember that a person’s name is to that person the sweetest and most important sound in any language.
7. Be a good listener. Encourage others to talk about themselves.
8. Talk in terms of the other person’s interests.
9. Make the other person feel important - and do it sincerely.



Win People to Your Way of Thinking
10. The only way to get the best of an argument is to avoid it.
11. Show respect for the other person’s opinion. Never say, “You’re wrong.”
12. If you are wrong, admit it quickly and emphatically.
13. Begin in a friendly way.
14. Get the other person saying “yes, yes” immediately.
15. Let the other person do a great deal of the talking.
16. Let the other person feel that the idea is his or hers.
17. Try honestly to see things from the other person’s point of view.
18. Be sympathetic with the other person’s ideas and desires.
19. Appeal to the nobler motives.
20. Dramatize your ideas.
21. Throw down a challenge.



Be a Leader
22. Begin with praise and honest appreciation.
23. Call attention to people’s mistakes indirectly.
24. Talk about your own mistakes before criticizing the other person.
25. Ask questions instead of giving direct orders.
26. Let the other person save face.
27. Praise the slightest improvement and praise every improvement. Be “hearty in your approbation and lavish in your praise.”
28. Give the other person a fine reputation to live up to.
29. Use encouragement. Make the fault seem easy to correct.
30. Make the other person happy about doing the thing you suggest.



Fundamental Principles for Overcoming Worry
1. Live in “day-tight compartments.”
2. How to face trouble:
 a. Ask yourself, “What is the worst that can possibly happen?”
 b. Prepare to accept the worst.
 c. Try to improve on the worst.
3. Remind yourself of the exorbitant price you can pay for worry in terms of your health.



Basic Techniques in Analyzing Worry
1. Get all the facts.
2. Weigh all the facts — then come to a decision.
3. Once a decision is reached, act!
4. Write out and answer the following questions:
 a. What is the problem?
 b. What are the causes of the problem?
 c. What are the possible solutions?
 d. What is the best possible solution?



Break the Worry Habit Before It Breaks You
1. Keep busy.
2. Don’t fuss about trifles.
3. Use the law of averages to outlaw your worries.
4. Cooperate with the inevitable.
5. Decide just how much anxiety a thing may be worth and refuse to give it more.
6. Don’t worry about the past.



Cultivate a Mental Attitude that will Bring You Peace and Happiness
1. Fill your mind with thoughts of peace, courage, health and hope.
2. Never try to get even with your enemies.
3. Expect ingratitude.
4. Count your blessings — not your troubles.
5. Do not imitate others.
6. Try to profit from your losses.
7. Create happiness for others.

Sunday 27 April 2014

Basic SQL Operations in R



I want to have in R the equivalent of most of the basic operations normally performed in SQL.
In this post, each topic is presented as a snippet in SQL, immediately followed by its R counterpart.

Topics Covered:
- Distinct
- Where
- Inner / outer joins
- Group by


Before starting with the pure R syntax, just keep in mind that R provides a very useful package called sqldf. Through this package it is possible to run simple SQL queries over tables / data frames.

 # installs everything you need to use sqldf with SQLite  
 # including SQLite itself  
 install.packages("sqldf")  
 # shows built in data frames  
 data()   
 # load sqldf into workspace  
 library(sqldf)  
 sqldf("select * from iris limit 5")  
 sqldf("select count(*) from iris")  
 sqldf("select Species, count(*) from iris group by Species")  
 # create a data frame  
 DF <- data.frame(a = 1:5, b = letters[1:5])  
 sqldf("select * from DF")  
 sqldf("select avg(a) mean, variance(a) var from DF") # see example 15  

Source: http://code.google.com/p/sqldf/



WHERE


 SELECT *   
 FROM df1   
 WHERE Product = 'Toaster'  


In R:
 df1 = data.frame(CustomerId=c(1:6), Product=c(rep("Toaster",3), rep("Radio",3)))  
 df <- df1[df1$Product == "Toaster", ]  
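
The same filter can also be written with subset() or through the sqldf package mentioned above. A minimal sketch, assuming the df1 data frame defined in the snippet just before (the variable names here are only illustrative):

 # base R alternative: subset() keeps only the rows matching the condition  
 df_toasters <- subset(df1, Product == "Toaster")  
 # sqldf alternative: run the WHERE clause as plain SQL over the data frame  
 library(sqldf)  
 df_toasters_sql <- sqldf("select * from df1 where Product = 'Toaster'")  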




DISTINCT

The SELECT DISTINCT in SQL:

 select distinct x  
 from my_table;  

The equivalent in R is:

 > x <- list(a=c(1,2,3), b = c(2,3,4), c=c(4,5,6))  
 > xx <- unlist(x)  
 > xx  
 a1 a2 a3 b1 b2 b3 c1 c2 c3   
  1 2 3 2 3 4 4 5 6   
 > unique(xx)  
 [1] 1 2 3 4 5 6  
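
For columns of a data frame, the same result can be obtained more directly. A minimal sketch, reusing the df1 data frame from the WHERE example above:

 # distinct values of a single column  
 unique(df1$Product)  
 # the same, keeping the result as a data frame  
 unique(df1["Product"])  
 # or, with sqldf  
 sqldf("select distinct Product from df1")  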




INNER / OUTER JOINS

Given the following query in SQL:

 select *   
 from product [left] [right] [outer] join countries  
     on (product.customer_id = countries.customer_id)  


In R:
 df1 = data.frame(CustomerId=c(1:6),Product=c(rep("Toaster",3),rep("Radio",3)))  
 df2 = data.frame(CustomerId=c(2,4,6),State=c(rep("Alabama",2),rep("Ohio",1)))  
 > df1  
  CustomerId Product  
       1 Toaster  
       2 Toaster  
       3 Toaster  
       4  Radio  
       5  Radio  
       6  Radio  
 > df2  
  CustomerId  State  
       2 Alabama  
       4 Alabama  
       6  Ohio  
 #Outer join:   
 merge(x = df1, y = df2, by = "CustomerId", all = TRUE)  
 #Left outer:   
 merge(x = df1, y = df2, by = "CustomerId", all.x=TRUE)  
 #Right outer:   
 merge(x = df1, y = df2, by = "CustomerId", all.y=TRUE)  
 #Cross join:   
 merge(x = df1, y = df2, by = NULL)  
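
The plain inner join is missing from the list above: with merge() it is simply the default behaviour (no all argument). A minimal sketch, with a sqldf equivalent for comparison:

 # Inner join: only the CustomerIds present in both data frames (2, 4, 6)  
 merge(x = df1, y = df2, by = "CustomerId")  
 # The same join expressed with sqldf  
 sqldf("select df1.CustomerId, Product, State from df1 join df2 on df1.CustomerId = df2.CustomerId")  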

Source:
http://stackoverflow.com/questions/1299871/how-to-join-data-frames-in-r-inner-outer-left-right


GROUP BY


For the Group By function there are many options.
Let's start with the most basic one:

Given the following snippet in SQL:
 CREATE TABLE my_table (  
  a varchar2(10 char),   
  b varchar2(10 char),   
  c number  
 );  
 SELECT a, b, AVG(c)  
 FROM my_table  
 GROUP BY a, b  


In R:
 grouped_data <- aggregate(my_table$c, by = list(a = my_table$a, b = my_table$b), FUN = mean)  

Alternatively:
 > mydf  
  A B  
 1 1 2  
 2 1 3  
 3 2 3  
 4 3 5  
 5 3 6  
 > aggregate(B ~ A, mydf, sum)  
  A B  
 1 1 5  
 2 2 3  
 3 3 11  



If your data are large, I would also recommend looking into the "data.table" package.

  
 > library(data.table)  
 > DT <- data.table(mydf)  
 > DT[, sum(B), by = A]  
   A V1  
 1: 1 5  
 2: 2 3  
 3: 3 11  



And finally the widely recommended ddply function from the plyr package:
 > DF <- data.frame(A = c("1", "1", "2", "3", "3"), B = c(2, 3, 3, 5, 6))  
 > library(plyr)  
 > DF.sum <- ddply(DF, c("A"), summarize, B = sum(B))  
 > DF.sum  
  A B  
 1 1 5  
 2 2 3  
 3 3 11  
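
For completeness, the same aggregation can also be written with the sqldf package shown at the beginning of the post; a minimal sketch on the same mydf data frame:

 # group B by A and sum it, exactly like the SQL snippet above  
 sqldf("select A, sum(B) as B from mydf group by A")  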

Source:
http://stackoverflow.com/questions/18799901/data-frame-group-by-column

Friday 25 April 2014

Boss Vs. Leader

I think it is a bit old, but I would like to have it stamped on my blog...
I do not have much time these days :/ this is the most I can do...



Sunday 13 April 2014

ORACLE: Analytical Functions


Analytical functions can greatly speed up both the development and the execution of your queries, in particular because they are automatically optimized by Oracle itself.

Here they are reported in a veeeeery small nutshell:


Count (number of elements in the same group)
SELECT empno, deptno, 
COUNT(*) OVER (PARTITION BY 
deptno) DEPT_COUNT
FROM emp
WHERE deptno IN (20, 30);

     EMPNO     DEPTNO DEPT_COUNT
---------- ---------- ----------
      7369         20          5
      7566         20          5
      7788         20          5
      7902         20          5
      7876         20          5
      7499         30          6
      7900         30          6
      7844         30          6
      7698         30          6
      7654         30          6
      7521         30          6

11 rows selected.



Row Number (id of the entry within the group)
SELECT empno, deptno, hiredate,
ROW_NUMBER( ) OVER (PARTITION BY deptno ORDER BY hiredate NULLS LAST) SRLNO
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, SRLNO;

EMPNO  DEPTNO HIREDATE       SRLNO
------ ------- --------- ----------
  7782      10 09-JUN-81          1
  7839      10 17-NOV-81          2
  7934      10 23-JAN-82          3
  7369      20 17-DEC-80          1
  7566      20 02-APR-81          2
  7902      20 03-DEC-81          3
  7788      20 09-DEC-82          4
  7876      20 12-JAN-83          5

8 rows selected.


Rank & Dense Rank (ranking of the elements within the group)
SELECT empno, deptno, sal,
RANK() OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) RANK,
DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) DENSE_RANK
FROM emp
WHERE deptno IN (10, 20)
ORDER BY 2, RANK;

EMPNO  DEPTNO   SAL  RANK DENSE_RANK
------ ------- ----- ----- ----------
  7839      10  5000     1          1
  7782      10  2450     2          2
  7934      10  1300     3          3
  7788      20  3000     1          1
  7902      20  3000     1          1
  7566      20  2975     3          2
  7876      20  1100     4          3
  7369      20   800     5          4

8 rows selected.


Lead & Lag (next / previous member of the group with respect to the current element)
SELECT deptno, empno, sal,
LEAD(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) NEXT_LOWER_SAL,
LAG(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) PREV_HIGHER_SAL
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, sal DESC;

 DEPTNO  EMPNO   SAL NEXT_LOWER_SAL PREV_HIGHER_SAL
------- ------ ----- -------------- ---------------
     10   7839  5000           2450               0
     10   7782  2450           1300            5000
     10   7934  1300              0            2450
     20   7788  3000           3000               0
     20   7902  3000           2975            3000
     20   7566  2975           1100            3000
     20   7876  1100            800            2975
     20   7369   800              0            1100

8 rows selected.


First Value & Last Value
-- How many days after the first hire of each department were the next
-- employees hired?

SELECT empno, deptno, hiredate - FIRST_VALUE(hiredate)
OVER (PARTITION BY deptno ORDER BY hiredate) DAY_GAP
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, DAY_GAP;

     EMPNO     DEPTNO    DAY_GAP
---------- ---------- ----------
      7369         20          0
      7566         20        106
      7902         20        351
      7788         20        722
      7876         20        756
      7499         30          0
      7521         30          2
      7698         30         70
      7844         30        200
      7654         30        220
      7900         30        286

11 rows selected.



Source:
http://www.orafaq.com/node/55



Wednesday 9 April 2014

File system access on Oracle




It may sound easy, but accessing the file system from Oracle can be painful.
I am not talking about reading / writing a file: I am talking about running an ls or dir command, creating folders, moving files, etc.
In this post I would like to recall an easy way of performing an ls.

Actually the solution is already very well explained on this web page:
http://plsqlexecoscomm.sourceforge.net/


The solution is mainly based on a Java package installed in the Oracle DB, which accesses the file system and arranges the data in a proper way.

First of all you need to install the package (available at the link above), and then run a simple query like the one below:

select * 
from table(
    file_pkg.get_file_list(file_pkg.get_file('/'))
)

And there you are: the result of an ls command executed on the root directory, accessible as a simple select.