{"trustable":true,"prependHtml":"\u003cscript\u003e\n window.katexOptions \u003d {\n delimiters: [\n {left: \u0027\\\\(\u0027, right: \u0027\\\\)\u0027, display: false},\n ]\n };\n\u003c/script\u003e\n","sections":[{"title":"","value":{"format":"HTML","content":"\u003cp\u003e\nInformation Theory is one of the most popular courses in Marjar University. In this course, there is an important chapter about information entropy.\n\u003c/p\u003e\n\n\u003cp\u003e\nEntropy is the average amount of information contained in each message received. Here, a message stands for an event, or a sample or a character drawn from a distribution or a data stream. Entropy thus characterizes our uncertainty about our source of information. The source is also characterized by the probability distribution of the samples drawn from it. The idea here is that the less likely an event is, the more information it provides when it occurs. \n\u003c/p\u003e\n\n\u003cp\u003e\nGenerally, \"entropy\" stands for \"disorder\" or uncertainty. The entropy we talk about here was introduced by Claude E. Shannon in his 1948 paper \"A Mathematical Theory of Communication\". We also call it Shannon entropy or information entropy to distinguish from other occurrences of the term, which appears in various parts of physics in different forms.\n\u003c/p\u003e\n\n\u003cp\u003e\nNamed after Boltzmann\u0027s H-theorem, Shannon defined the entropy Η (Greek letter Η, η) of a discrete random variable \u003cvar\u003eX\u003c/var\u003e with possible values \u003cvar\u003e{x\u003csub\u003e1\u003c/sub\u003e, x\u003csub\u003e2\u003c/sub\u003e, ..., x\u003csub\u003en\u003c/sub\u003e}\u003c/var\u003e and probability mass function \u003cvar\u003eP(X)\u003c/var\u003e as:\n\u003c/p\u003e\n\n\u003cscript type\u003d\"text/javascript\" src\u003d\"http://cdn.mathjax.org/mathjax/latest/MathJax.js?config\u003ddefault\"\u003e\u003c/script\u003e\n\n\u003cp\u003e\n$\\Large H(X)\u003dE(-\\ln(P(x)))$\n\u003c/p\u003e\n\n\u003cp\u003e\nHere E is the expected value operator. When taken from a finite sample, the entropy can explicitly be written as\n\u003c/p\u003e\n\n\u003cp\u003e\n$\\Large H(X)\u003d-\\sum_{i\u003d1}^{n}P(x_i)\\log_{~b}(P(x_i))$\n\u003c/p\u003e\n\n\u003cp\u003e\nWhere \u003cvar\u003eb\u003c/var\u003e is the base of the logarithm used. Common values of b are 2, Euler\u0027s number \u003cvar\u003ee\u003c/var\u003e, and 10. The unit of entropy is \u003cem\u003ebit\u003c/em\u003e for \u003cvar\u003eb\u003c/var\u003e \u003d 2, \u003cem\u003enat\u003c/em\u003e for \u003cvar\u003eb\u003c/var\u003e \u003d \u003cvar\u003ee\u003c/var\u003e, and \u003cem\u003edit\u003c/em\u003e (or digit) for \u003cvar\u003eb\u003c/var\u003e \u003d 10 respectively.\n\u003c/p\u003e\n\u003cp\u003e\nIn the case of \u003cvar\u003eP(x\u003csub\u003ei\u003c/sub\u003e)\u003c/var\u003e \u003d 0 for some \u003cvar\u003ei\u003c/var\u003e, the value of the corresponding summand 0 log\u003csub\u003eb\u003c/sub\u003e(0) is taken to be a well-known limit:\n\u003c/p\u003e\n$ \\Large 0 \\log_{~b}(0) \u003d \\lim_{p \\to 0 +} p \\log_{~b} (p)$\n\u003cp\u003e\nYour task is to calculate the entropy of a finite sample with \u003cvar\u003eN\u003c/var\u003e values.\n\u003c/p\u003e\n\n\u003ch4\u003eInput\u003c/h4\u003e\n\n\u003cp\u003e\nThere are multiple test cases. The first line of input contains an integer \u003cvar\u003eT\u003c/var\u003e indicating the number of test cases. 
For each test case:\n\u003c/p\u003e\n\n\u003cp\u003e\nThe first line contains an integer \u003cvar\u003eN\u003c/var\u003e (1 \u0026lt;\u003d \u003cvar\u003eN\u003c/var\u003e \u0026lt;\u003d 100) and a string \u003cvar\u003eS\u003c/var\u003e. The string \u003cvar\u003eS\u003c/var\u003e is one of \"bit\", \"nat\", or \"dit\", indicating the unit of entropy.\n\u003c/p\u003e\n\n\u003cp\u003e\nThe next line contains \u003cvar\u003eN\u003c/var\u003e non-negative integers \u003cvar\u003eP\u003csub\u003e1\u003c/sub\u003e\u003c/var\u003e, \u003cvar\u003eP\u003csub\u003e2\u003c/sub\u003e\u003c/var\u003e, ..., \u003cvar\u003eP\u003csub\u003eN\u003c/sub\u003e\u003c/var\u003e. \u003cvar\u003eP\u003csub\u003ei\u003c/sub\u003e\u003c/var\u003e is the probability of the \u003cvar\u003ei\u003c/var\u003e-th value, given as a percentage, and the sum of all \u003cvar\u003eP\u003csub\u003ei\u003c/sub\u003e\u003c/var\u003e is 100.\n\u003c/p\u003e\n\n\u003ch4\u003eOutput\u003c/h4\u003e\n\n\u003cp\u003e\nFor each test case, output the entropy in the corresponding unit.\n\u003c/p\u003e\n\n\u003cp\u003e\nAny solution with a relative or absolute error of at most 10\u003csup\u003e-8\u003c/sup\u003e will be accepted.\n\u003c/p\u003e\n\n\u003ch4\u003eSample\u003c/h4\u003e\n\u003ctable class\u003d\"vjudge_sample\"\u003e\n\u003cthead\u003e\n \u003ctr\u003e\n \u003cth\u003eInput\u003c/th\u003e\n \u003cth\u003eOutput\u003c/th\u003e\n \u003c/tr\u003e\n\u003c/thead\u003e\n\u003ctbody\u003e\n \u003ctr\u003e\n \u003ctd\u003e\u003cpre\u003e3\n3 bit\n25 25 50\n7 nat\n1 2 4 8 16 32 37\n10 dit\n10 10 10 10 10 10 10 10 10 10\n\u003c/pre\u003e\u003c/td\u003e\n \u003ctd\u003e\u003cpre\u003e1.500000000000\n1.480810832465\n1.000000000000\n\u003c/pre\u003e\u003c/td\u003e\n \u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n\n\u003c!--\n\u003ch4\u003eReference\u003c/h4\u003e\n\u003cp\u003e\n\u003ca href\u003d\"http://en.wikipedia.org/wiki/Entropy_(information_theory)\"\u003ehttp://en.wikipedia.org/wiki/Entropy_(information_theory)\u003c/a\u003e\n\u003c/p\u003e\n--\u003e\n"}}]}
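<p>
For illustration only (this sketch is not part of the original statement and is not a reference solution): a minimal Python program that computes the entropy as described above. It converts the percentages to probabilities, skips zero entries because 0 log<sub>b</sub>(0) is taken to be 0, and chooses the logarithm base from the unit string; the structure and names are assumptions made for this sketch.
</p>
<pre>
import sys
import math

def main():
    data = sys.stdin.read().split()
    idx = 0
    t = int(data[idx]); idx += 1
    # Base of the logarithm: 2 for "bit", e for "nat", 10 for "dit".
    base = {"bit": 2.0, "nat": math.e, "dit": 10.0}
    for _ in range(t):
        n, unit = int(data[idx]), data[idx + 1]
        idx += 2
        entropy = 0.0
        for _ in range(n):
            percent = int(data[idx]); idx += 1
            if percent == 0:
                continue  # 0 * log_b(0) contributes 0 (the limit above)
            p = percent / 100.0  # percentages sum to 100
            entropy -= p * math.log(p, base[unit])
        print("%.12f" % entropy)

main()
</pre>
<p>
On the sample input this prints 1.500000000000, 1.480810832465, and 1.000000000000, matching the expected output within the stated 10<sup>-8</sup> tolerance.
</p>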