diff -Nru haskell-hledger-lib-1.18.1/CHANGES.md haskell-hledger-lib-1.19.1/CHANGES.md
--- haskell-hledger-lib-1.18.1/CHANGES.md	2020-06-21 17:22:53.000000000 +0000
+++ haskell-hledger-lib-1.19.1/CHANGES.md	2020-09-07 22:41:25.000000000 +0000
@@ -1,6 +1,104 @@
 Internal/api/developer-ish changes in the hledger-lib (and hledger)
 packages. For user-visible changes, see the hledger package changelog.
 
+# 1.19.1 2020-09-07
+
+- Allow megaparsec 9
+
+- stripAnsi: correctly strip ansi sequences with no
+  numbers/semicolons. (Stephen Morgan)
+
+- Added case-insensitive accountNameToAccountRegexCI,
+  accountNameToAccountOnlyRegexCI, made the default account type
+  queries case insensitive again. (#1341)
+
+# 1.19 2020-09-01
+
+- Added a missing lower bound for aeson, making cabal installs more
+  reliable. (#1268)
+
+- The Regex type alias has been replaced by the Regexp ADT, which
+  contains both the compiled regular expression (so is guaranteed to
+  be usable at runtime) and the original string (so can be serialised,
+  printed, compared, etc.) A Regexp also knows whether it is case
+  sensitive or case insensitive. The Hledger.Utils.Regex API has
+  changed. (#1312, #1330).
+
+- Typeable and Data instances are no longer derived for hledger's
+  data types; they were redundant/no longer needed.
+
+- NFData instances are no longer derived for hledger's data types.
+  This speeds up a full build by roughly 7%. But it means we can't
+  deep-evaluate hledger values, or time hledger code with Criterion.
+  https://github.com/simonmichael/hledger/pull/1330#issuecomment-684075129
+  has some ideas on this.
+
+- Query no longer has a custom Show instance
+
+- Hledger.Utils.String: quoteIfNeeded now actually escapes quotes in
+  strings. escapeQuotes was dropped. (Stephen Morgan)
+
+- Hledger.Utils.Tree: dropped some old utilities
+
+- Some fromIntegral calls have been replaced with safer code, removing
+  some potential for integer wrapping bugs (#1325, #1326)
+
+- Parsing numbers with more than 255 decimal places now gives an error
+  instead of silently misparsing (#1326)
+
+- Digit groups are now limited to at most 255 digits each. (#1326)
+
+- Exponents are parsed as Integer rather than Int.
+  This means exponents greater than 9223372036854775807 or less than
+  -9223372036854775808 are now parsed correctly, in theory. (In
+  practice, very large exponents will cause hledger to eat all your
+  memory, so avoid them for now.) (#1326)
+
+- AmountStyle's asprecision is now a sum type with Word8, instead of
+  an Int with magic values.
+
+- DigitGroupStyle uses Word8 instead of Int.
+
+- Partial helper function parsedate has been dropped, use fromGregorian instead.
+
+- Partial helper function mkdatespan has been dropped.
+
+- Helper function transaction now takes a Day instead of a date string. (Stephen Morgan)
+
+- Old CPP directives made redundant by version bounds have been
+  removed. (Stephen Morgan)
+
+- Smart dates are now represented by the SmartDate type, and are
+  always well formed. (Stephen Morgan)
+
+- accountTransactionsReport (used for hledger aregister and
+  hledger-ui/hledger-web registers) now filters transactions more
+  thoroughly, so eg transactions dated outside the report period will
+  not be shown. Previously the transaction would be shown if it had
+  any posting dated inside the report period. Possibly some other
+  filter criteria now get applied that didn't before. I think on
+  balance this will give slightly preferable results.
+
+- The old BalanceReport code has been dropped at last, replaced by
+  MultiBalanceReport so that all balance reports now use the same
+  code. (Stephen Morgan, #1256).
+
+  - The large multiBalanceReport function has been split up and refactored
+    extensively.
+  - Tabular data formerly represented as [[MixedAmount]] is now HashMap
+    AccountName (Map DateSpan Account). Reports with many columns are now faster.
+  - Calculating starting balances no longer calls the whole balanceReport,
+    just the first few functions.
+  - displayedAccounts is completely rewritten. Perhaps one subtle thing to
+    note is that in tree mode it no longer excludes nodes with zero inclusive
+    balance unless they also have zero exclusive balance.
+  - Simon's note: "I'll mark the passing of the old multiBalanceReport, into
+    which I poured many an hour :). It is in a way the heart (brain ?) of
+    hledger - the key feature of ledgerlikes (balance report) and a key
+    improvement introduced by hledger (tabular multiperiod balance reports)
+    ...
+    Thanks @Xitian9, great work."
+
 # 1.18.1 2020-06-21
 
 - fix some doc typos (Martin Michlmayr)
diff -Nru haskell-hledger-lib-1.18.1/debian/changelog haskell-hledger-lib-1.19.1/debian/changelog
--- haskell-hledger-lib-1.18.1/debian/changelog	2020-10-29 20:33:15.000000000 +0000
+++ haskell-hledger-lib-1.19.1/debian/changelog	2021-09-20 19:38:16.000000000 +0000
@@ -1,8 +1,8 @@
-haskell-hledger-lib (1.18.1-1build1) hirsute; urgency=medium
+haskell-hledger-lib (1.19.1-1) unstable; urgency=medium
 
- * No-change rebuild for new GHC ABIs
+ * New upstream release
 
- -- Steve Langasek  Thu, 29 Oct 2020 20:33:15 +0000
+ -- Clint Adams  Mon, 20 Sep 2021 15:38:16 -0400
 
 haskell-hledger-lib (1.18.1-1) unstable; urgency=medium
diff -Nru haskell-hledger-lib-1.18.1/debian/control haskell-hledger-lib-1.19.1/debian/control
--- haskell-hledger-lib-1.18.1/debian/control	2020-10-29 20:33:15.000000000 +0000
+++ haskell-hledger-lib-1.19.1/debian/control	2021-09-20 19:38:16.000000000 +0000
@@ -1,6 +1,5 @@
 Source: haskell-hledger-lib
-Maintainer: Ubuntu Developers
-XSBC-Original-Maintainer: Debian Haskell Group
+Maintainer: Debian Haskell Group
 Uploaders: Clint Adams ,
 Priority: optional
@@ -16,11 +15,11 @@
  libghc-decimal-prof,
  libghc-glob-dev
(>= 0.9), libghc-glob-prof, - libghc-aeson-dev, + libghc-aeson-dev (>= 1), libghc-aeson-prof, libghc-aeson-pretty-dev, libghc-aeson-pretty-prof, - libghc-ansi-terminal-dev (>= 0.6.2.3), + libghc-ansi-terminal-dev (>= 0.9), libghc-ansi-terminal-prof, libghc-base-compat-batteries-dev (>= 0.10.1), libghc-base-compat-batteries-dev (<< 0.12), @@ -46,7 +45,7 @@ libghc-hashtables-dev (>= 1.2.3.1), libghc-hashtables-prof, libghc-megaparsec-dev (>= 7.0.0), - libghc-megaparsec-dev (<< 8.1), + libghc-megaparsec-dev (<< 9.1), libghc-megaparsec-prof, libghc-old-time-dev, libghc-old-time-prof, @@ -70,6 +69,8 @@ libghc-timeit-prof, libghc-uglymemo-dev, libghc-uglymemo-prof, + libghc-unordered-containers-dev (>= 0.2), + libghc-unordered-containers-prof, libghc-utf8-string-dev (>= 0.3.5), libghc-utf8-string-prof, libghc-doctest-dev (>= 0.16) , @@ -102,6 +103,7 @@ libghc-tasty-hunit-doc, libghc-timeit-doc, libghc-uglymemo-doc, + libghc-unordered-containers-doc, libghc-utf8-string-doc, Standards-Version: 4.5.0 Homepage: https://hledger.org diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Account.hs haskell-hledger-lib-1.19.1/Hledger/Data/Account.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Account.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Account.hs 2020-09-01 17:33:33.000000000 +0000 @@ -9,18 +9,18 @@ module Hledger.Data.Account where -import Data.List +import Data.List (find, sortOn) import Data.List.Extra (groupSort, groupOn) -import Data.Maybe -import Data.Ord +import Data.Maybe (fromMaybe) +import Data.Ord (Down(..)) import qualified Data.Map as M -import Data.Text (pack,unpack) +import qualified Data.Text as T import Safe (headMay, lookupJustDef) import Text.Printf import Hledger.Data.AccountName import Hledger.Data.Amount -import Hledger.Data.Posting() +import Hledger.Data.Posting () import Hledger.Data.Types import Hledger.Utils @@ -28,11 +28,12 @@ -- deriving instance Show Account instance Show Account where show Account{..} = 
printf "Account %s (boring:%s, postings:%d, ebalance:%s, ibalance:%s)" - (pack $ regexReplace ":" "_" $ unpack aname) -- hide : so pretty-show doesn't break line + (T.map colonToUnderscore aname) -- hide : so pretty-show doesn't break line (if aboring then "y" else "n" :: String) anumpostings (showMixedAmount aebalance) (showMixedAmount aibalance) + where colonToUnderscore x = if x == ':' then '_' else x instance Eq Account where (==) a b = aname a == aname b -- quick equality test for speed @@ -134,11 +135,13 @@ -- | Remove subaccounts below the specified depth, aggregating their balance at the depth limit -- (accounts at the depth limit will have any sub-balances merged into their exclusive balance). -clipAccountsAndAggregate :: Int -> [Account] -> [Account] -clipAccountsAndAggregate d as = combined +-- If the depth is Nothing, return the original accounts +clipAccountsAndAggregate :: Maybe Int -> [Account] -> [Account] +clipAccountsAndAggregate Nothing as = as +clipAccountsAndAggregate (Just d) as = combined where - clipped = [a{aname=clipOrEllipsifyAccountName d $ aname a} | a <- as] - combined = [a{aebalance=sum (map aebalance same)} + clipped = [a{aname=clipOrEllipsifyAccountName (Just d) $ aname a} | a <- as] + combined = [a{aebalance=sum $ map aebalance same} | same@(a:_) <- groupOn aname clipped] {- test cases, assuming d=1: @@ -199,14 +202,11 @@ -- if balances are normally negative, then the most negative balances -- sort first, and vice versa. sortAccountTreeByAmount :: NormalSign -> Account -> Account -sortAccountTreeByAmount normalsign a - | null $ asubs a = a - | otherwise = a{asubs= - sortBy (maybeflip $ comparing (normaliseMixedAmountSquashPricesForDisplay . 
aibalance)) $ - map (sortAccountTreeByAmount normalsign) $ asubs a} +sortAccountTreeByAmount normalsign = mapAccounts $ \a -> a{asubs=sortSubs $ asubs a} where - maybeflip | normalsign==NormallyNegative = id - | otherwise = flip + sortSubs = case normalsign of + NormallyPositive -> sortOn (Down . normaliseMixedAmountSquashPricesForDisplay . aibalance) + NormallyNegative -> sortOn ( normaliseMixedAmountSquashPricesForDisplay . aibalance) -- | Add extra info for this account derived from the Journal's -- account directives, if any (comment, tags, declaration order..). diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/AccountName.hs haskell-hledger-lib-1.19.1/Hledger/Data/AccountName.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/AccountName.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/AccountName.hs 2020-09-03 23:42:32.000000000 +0000 @@ -16,9 +16,10 @@ ,accountNameFromComponents ,accountNameLevel ,accountNameToAccountOnlyRegex + ,accountNameToAccountOnlyRegexCI ,accountNameToAccountRegex + ,accountNameToAccountRegexCI ,accountNameTreeFrom - ,accountRegexToAccountName ,accountSummarisedName ,acctsep ,acctsepchar @@ -40,15 +41,14 @@ ) where -import Data.List import Data.List.Extra (nubSort) +import qualified Data.List.NonEmpty as NE #if !(MIN_VERSION_base(4,11,0)) -import Data.Monoid +import Data.Semigroup ((<>)) #endif import Data.Text (Text) import qualified Data.Text as T -import Data.Tree -import Text.Printf +import Data.Tree (Tree(..)) import Hledger.Data.Types import Hledger.Utils @@ -103,9 +103,11 @@ case accountNameDrop n $ T.drop (T.length unbudgetedAccountAndSep) a of "" -> unbudgetedAccountName a' -> unbudgetedAccountAndSep <> a' - | otherwise = accountNameFromComponents $ drop n $ accountNameComponents a + | otherwise = accountNameFromComponentsOrElide . drop n $ accountNameComponents a where unbudgetedAccountAndSep = unbudgetedAccountName <> acctsep + accountNameFromComponentsOrElide [] = "..." 
+ accountNameFromComponentsOrElide xs = accountNameFromComponents xs -- | Sorted unique account names implied by these account names, -- ie these plus all their parent accounts up to the root. @@ -115,7 +117,7 @@ -- | "a:b:c" -> ["a","a:b","a:b:c"] expandAccountName :: AccountName -> [AccountName] -expandAccountName = map accountNameFromComponents . tail . inits . accountNameComponents +expandAccountName = map accountNameFromComponents . NE.tail . NE.inits . accountNameComponents -- | ["a:b:c","d:e"] -> ["a","d"] topAccountNames :: [AccountName] -> [AccountName] @@ -192,36 +194,45 @@ | otherwise = done++ss -- | Keep only the first n components of an account name, where n --- is a positive integer. If n is 0, returns the empty string. -clipAccountName :: Int -> AccountName -> AccountName -clipAccountName n = accountNameFromComponents . take n . accountNameComponents +-- is a positive integer. If n is Just 0, returns the empty string, if n is +-- Nothing, return the full name. +clipAccountName :: Maybe Int -> AccountName -> AccountName +clipAccountName Nothing = id +clipAccountName (Just n) = accountNameFromComponents . take n . accountNameComponents -- | Keep only the first n components of an account name, where n --- is a positive integer. If n is 0, returns "...". -clipOrEllipsifyAccountName :: Int -> AccountName -> AccountName -clipOrEllipsifyAccountName 0 = const "..." -clipOrEllipsifyAccountName n = accountNameFromComponents . take n . accountNameComponents +-- is a positive integer. If n is Just 0, returns "...", if n is Nothing, return +-- the full name. +clipOrEllipsifyAccountName :: Maybe Int -> AccountName -> AccountName +clipOrEllipsifyAccountName (Just 0) = const "..." +clipOrEllipsifyAccountName n = clipAccountName n -- | Escape an AccountName for use within a regular expression. -- >>> putStr $ escapeName "First?!#$*?$(*) !@^#*? %)*!@#" -- First\?!#\$\*\?\$\(\*\) !@\^#\*\? 
%\)\*!@# -escapeName :: AccountName -> Regexp -escapeName = regexReplaceBy "[[?+|()*\\\\^$]" ("\\" <>) - . T.unpack +escapeName :: AccountName -> String +escapeName = T.unpack . T.concatMap escapeChar + where + escapeChar c = if c `elem` escapedChars then T.snoc "\\" c else T.singleton c + escapedChars = ['[', '?', '+', '|', '(', ')', '*', '$', '^', '\\'] -- | Convert an account name to a regular expression matching it and its subaccounts. accountNameToAccountRegex :: AccountName -> Regexp -accountNameToAccountRegex "" = "" -accountNameToAccountRegex a = printf "^%s(:|$)" (escapeName a) +accountNameToAccountRegex a = toRegex' $ '^' : escapeName a ++ "(:|$)" -- PARTIAL: Is this safe after escapeName? + +-- | Convert an account name to a regular expression matching it and its subaccounts, +-- case insensitively. +accountNameToAccountRegexCI :: AccountName -> Regexp +accountNameToAccountRegexCI a = toRegexCI' $ '^' : escapeName a ++ "(:|$)" -- PARTIAL: Is this safe after escapeName? -- | Convert an account name to a regular expression matching it but not its subaccounts. accountNameToAccountOnlyRegex :: AccountName -> Regexp -accountNameToAccountOnlyRegex "" = "" -accountNameToAccountOnlyRegex a = printf "^%s$" $ escapeName a -- XXX pack +accountNameToAccountOnlyRegex a = toRegex' $ '^' : escapeName a ++ "$" -- PARTIAL: Is this safe after escapeName? --- | Convert an exact account-matching regular expression to a plain account name. -accountRegexToAccountName :: Regexp -> AccountName -accountRegexToAccountName = T.pack . regexReplace "^\\^(.*?)\\(:\\|\\$\\)$" "\\1" -- XXX pack +-- | Convert an account name to a regular expression matching it but not its subaccounts, +-- case insensitively. +accountNameToAccountOnlyRegexCI :: AccountName -> Regexp +accountNameToAccountOnlyRegexCI a = toRegexCI' $ '^' : escapeName a ++ "$" -- PARTIAL: Is this safe after escapeName? -- -- | Does this string look like an exact account-matching regular expression ? 
--isAccountRegex :: String -> Bool diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Amount.hs haskell-hledger-lib-1.19.1/Hledger/Data/Amount.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Amount.hs 2020-06-04 21:01:48.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Amount.hs 2020-09-06 23:37:24.000000000 +0000 @@ -40,10 +40,6 @@ -} --- Silence safe 0.3.18's deprecation warnings for (max|min)imum(By)?Def for now --- (may hide other deprecation warnings too). https://github.com/ndmitchell/safe/issues/26 -{-# OPTIONS_GHC -Wno-warnings-deprecations #-} - {-# LANGUAGE StandaloneDeriving, RecordWildCards, OverloadedStrings #-} module Hledger.Data.Amount ( @@ -78,13 +74,9 @@ showAmountWithZeroCommodity, showAmountDebug, showAmountWithoutPrice, - maxprecision, - maxprecisionwithpoint, setAmountPrecision, withPrecision, setFullPrecision, - setNaturalPrecision, - setNaturalPrecisionUpTo, setAmountInternalPrecision, withInternalPrecision, setAmountDecimalPoint, @@ -100,6 +92,7 @@ mapMixedAmount, normaliseMixedAmountSquashPricesForDisplay, normaliseMixedAmount, + unifyMixedAmount, mixedAmountStripPrices, -- ** arithmetic mixedAmountCost, @@ -120,8 +113,7 @@ showMixedAmountDebug, showMixedAmountWithoutPrice, showMixedAmountOneLineWithoutPrice, - cshowMixedAmountWithoutPrice, - cshowMixedAmountOneLineWithoutPrice, + showMixedAmountElided, showMixedAmountWithZeroCommodity, showMixedAmountWithPrecision, setMixedAmountPrecision, @@ -131,22 +123,23 @@ tests_Amount ) where +import Control.Monad (foldM) import Data.Char (isDigit) -import Data.Decimal (roundTo, decimalPlaces, normalizeDecimal) +import Data.Decimal (decimalPlaces, normalizeDecimal, roundTo) import Data.Function (on) import Data.List import qualified Data.Map as M import Data.Map (findWithDefault) import Data.Maybe import qualified Data.Text as T -import Safe (maximumDef) +import Data.Word (Word8) +import Safe (lastDef, maximumMay) import Text.Printf import Hledger.Data.Types import Hledger.Data.Commodity 
import Hledger.Utils - deriving instance Show MarketPrice @@ -154,7 +147,7 @@ -- Amount styles -- | Default amount style -amountstyle = AmountStyle L False 0 (Just '.') Nothing +amountstyle = AmountStyle L False (Precision 0) (Just '.') Nothing ------------------------------------------------------------------------------- @@ -181,11 +174,11 @@ -- Handy amount constructors for tests. -- usd/eur/gbp round their argument to a whole number of pennies/cents. num n = amount{acommodity="", aquantity=n} -hrs n = amount{acommodity="h", aquantity=n, astyle=amountstyle{asprecision=2, ascommodityside=R}} -usd n = amount{acommodity="$", aquantity=roundTo 2 n, astyle=amountstyle{asprecision=2}} -eur n = amount{acommodity="€", aquantity=roundTo 2 n, astyle=amountstyle{asprecision=2}} -gbp n = amount{acommodity="£", aquantity=roundTo 2 n, astyle=amountstyle{asprecision=2}} -per n = amount{acommodity="%", aquantity=n, astyle=amountstyle{asprecision=1, ascommodityside=R, ascommodityspaced=True}} +hrs n = amount{acommodity="h", aquantity=n, astyle=amountstyle{asprecision=Precision 2, ascommodityside=R}} +usd n = amount{acommodity="$", aquantity=roundTo 2 n, astyle=amountstyle{asprecision=Precision 2}} +eur n = amount{acommodity="€", aquantity=roundTo 2 n, astyle=amountstyle{asprecision=Precision 2}} +gbp n = amount{acommodity="£", aquantity=roundTo 2 n, astyle=amountstyle{asprecision=Precision 2}} +per n = amount{acommodity="%", aquantity=n, astyle=amountstyle{asprecision=Precision 1, ascommodityside=R, ascommodityspaced=True}} amt `at` priceamt = amt{aprice=Just $ UnitPrice priceamt} amt @@ priceamt = amt{aprice=Just $ TotalPrice priceamt} @@ -231,8 +224,13 @@ -- Does Decimal division, might be some rounding/irrational number issues. 
amountTotalPriceToUnitPrice :: Amount -> Amount amountTotalPriceToUnitPrice - a@Amount{aquantity=q, aprice=Just (TotalPrice pa@Amount{aquantity=pq, astyle=ps@AmountStyle{asprecision=pp}})} - = a{aprice = Just $ UnitPrice pa{aquantity=abs (pq/q), astyle=ps{asprecision=pp+1}}} + a@Amount{aquantity=q, aprice=Just (TotalPrice pa@Amount{aquantity=pq, astyle=ps})} + = a{aprice = Just $ UnitPrice pa{aquantity=abs (pq/q), astyle=ps{asprecision=pp}}} + where + -- Increase the precision by 1, capping at the max bound. + pp = case asprecision ps of + NaturalPrecision -> NaturalPrecision + Precision p -> Precision $ if p == maxBound then maxBound else p + 1 amountTotalPriceToUnitPrice a = a -- | Divide an amount's quantity by a constant. @@ -263,12 +261,17 @@ isNegativeAmount :: Amount -> Bool isNegativeAmount Amount{aquantity=q} = q < 0 -digits = "123456789" :: String +-- | Round an Amount's Quantity to its specified display precision. If that is +-- NaturalPrecision, this does nothing. +amountRoundedQuantity :: Amount -> Quantity +amountRoundedQuantity Amount{aquantity=q, astyle=AmountStyle{asprecision=p}} = case p of + NaturalPrecision -> q + Precision p' -> roundTo p' q -- | Does mixed amount appear to be zero when rendered with its -- display precision ? amountLooksZero :: Amount -> Bool -amountLooksZero = not . any (`elem` digits) . showAmountWithoutPriceOrCommodity +amountLooksZero = (0==) . amountRoundedQuantity -- | Is this amount exactly zero, ignoring its display precision ? amountIsZero :: Amount -> Bool @@ -276,43 +279,26 @@ -- | Get the string representation of an amount, based on its commodity's -- display settings except using the specified precision. -showAmountWithPrecision :: Int -> Amount -> String +showAmountWithPrecision :: AmountPrecision -> Amount -> String showAmountWithPrecision p = showAmount . setAmountPrecision p -- | Set an amount's display precision, flipped. 
-withPrecision :: Amount -> Int -> Amount +withPrecision :: Amount -> AmountPrecision -> Amount withPrecision = flip setAmountPrecision -- | Set an amount's display precision. -setAmountPrecision :: Int -> Amount -> Amount +setAmountPrecision :: AmountPrecision -> Amount -> Amount setAmountPrecision p a@Amount{astyle=s} = a{astyle=s{asprecision=p}} --- | Increase an amount's display precision, if needed, to enough --- decimal places to show it exactly (showing all significant decimal --- digits, excluding trailing zeros). +-- | Increase an amount's display precision, if needed, to enough decimal places +-- to show it exactly (showing all significant decimal digits, excluding trailing +-- zeros). setFullPrecision :: Amount -> Amount setFullPrecision a = setAmountPrecision p a where p = max displayprecision naturalprecision displayprecision = asprecision $ astyle a - naturalprecision = fromIntegral $ decimalPlaces $ normalizeDecimal $ aquantity a - --- | Set an amount's display precision to just enough decimal places --- to show it exactly (possibly less than the number specified by --- the amount's display style). -setNaturalPrecision :: Amount -> Amount -setNaturalPrecision a = setAmountPrecision normalprecision a - where - normalprecision = fromIntegral $ decimalPlaces $ normalizeDecimal $ aquantity a - --- | Set an amount's display precision to just enough decimal places --- to show it exactly (possibly less than the number specified by the --- amount's display style), but not more than the given maximum number --- of decimal digits. -setNaturalPrecisionUpTo :: Int -> Amount -> Amount -setNaturalPrecisionUpTo n a = setAmountPrecision (min n normalprecision) a - where - normalprecision = fromIntegral $ decimalPlaces $ normalizeDecimal $ aquantity a + naturalprecision = Precision . decimalPlaces . normalizeDecimal $ aquantity a -- | Get a string representation of an amount for debugging, -- appropriate to the current debug level. 9 shows maximum detail. 
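The precision hunks above replace Int-with-magic-values by an explicit sum type over Word8. As a standalone sketch (not the exact hledger-lib definitions — the constructor order here is an assumption, chosen so that the derived Ord makes NaturalPrecision compare highest, consistent with setFullPrecision's use of `max`):

```haskell
import Data.Word (Word8)

-- Assumed shape of the new 1.19 precision type: an explicit sum type
-- instead of an Int with sentinel values like 999998/999999.
data AmountPrecision = Precision Word8 | NaturalPrecision
  deriving (Eq, Ord, Show)

-- The capped increment used by amountTotalPriceToUnitPrice above:
-- one more decimal place, without wrapping around past 255.
bumpPrecision :: AmountPrecision -> AmountPrecision
bumpPrecision NaturalPrecision = NaturalPrecision
bumpPrecision (Precision p)
  | p == maxBound = Precision maxBound
  | otherwise     = Precision (p + 1)

main :: IO ()
main = do
  print $ max (Precision 2) NaturalPrecision  -- NaturalPrecision wins
  print $ bumpPrecision (Precision maxBound)  -- stays at Precision 255
```

With the derived Ord, "highest precision is preserved" (as the tests later in this diff check) falls out of ordinary `max`, and the Word8 cap removes the overflow that `p + 1` on an unbounded Int never had to worry about.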
@@ -321,8 +307,11 @@ showAmountDebug Amount{..} = printf "Amount {acommodity=%s, aquantity=%s, aprice=%s, astyle=%s}" (show acommodity) (show aquantity) (showAmountPriceDebug aprice) (show astyle) -- | Get the string representation of an amount, without any \@ price. -showAmountWithoutPrice :: Amount -> String -showAmountWithoutPrice a = showAmount a{aprice=Nothing} +-- With a True argument, adds ANSI codes to show negative amounts in red. +showAmountWithoutPrice :: Bool -> Amount -> String +showAmountWithoutPrice c a = showamt a{aprice=Nothing} + where + showamt = if c then cshowAmount else showAmount -- | Set an amount's internal precision, ie rounds the Decimal representing -- the amount's quantity to some number of decimal places. @@ -330,15 +319,15 @@ -- "If the value ends in 5 then it is rounded to the nearest even value (Banker's Rounding)". -- Does not change the amount's display precision. -- Intended only for internal use, eg when comparing amounts in tests. -setAmountInternalPrecision :: Int -> Amount -> Amount +setAmountInternalPrecision :: Word8 -> Amount -> Amount setAmountInternalPrecision p a@Amount{ aquantity=q, astyle=s } = a{ - astyle=s{asprecision=p} - ,aquantity=roundTo (fromIntegral p) q + astyle=s{asprecision=Precision p} + ,aquantity=roundTo p q } -- | Set an amount's internal precision, flipped. -- Intended only for internal use, eg when comparing amounts in tests. -withInternalPrecision :: Amount -> Int -> Amount +withInternalPrecision :: Amount -> Word8 -> Amount withInternalPrecision = flip setAmountInternalPrecision -- | Set (or clear) an amount's display decimal point. @@ -349,14 +338,6 @@ withDecimalPoint :: Amount -> Maybe Char -> Amount withDecimalPoint = flip setAmountDecimalPoint --- | Colour version. -cshowAmountWithoutPrice :: Amount -> String -cshowAmountWithoutPrice a = cshowAmount a{aprice=Nothing} - --- | Get the string representation of an amount, without any price or commodity symbol. 
-showAmountWithoutPriceOrCommodity :: Amount -> String -showAmountWithoutPriceOrCommodity a = showAmount a{acommodity="", aprice=Nothing} - showAmountPrice :: Maybe AmountPrice -> String showAmountPrice Nothing = "" showAmountPrice (Just (UnitPrice pa)) = " @ " ++ showAmount pa @@ -405,8 +386,7 @@ R -> printf "%s%s%s%s" quantity' space (T.unpack c') price where quantity = showamountquantity a - displayingzero = not (any (`elem` digits) quantity) - (quantity',c') | displayingzero && not showzerocommodity = ("0","") + (quantity',c') | amountLooksZero a && not showzerocommodity = ("0","") | otherwise = (quantity, quoteCommoditySymbolIfNeeded c) space = if not (T.null c') && ascommodityspaced then " " else "" :: String price = showAmountPrice mp @@ -418,14 +398,8 @@ -- | Get the string representation of the number part of of an amount, -- using the display settings from its commodity. showamountquantity :: Amount -> String -showamountquantity Amount{aquantity=q, astyle=AmountStyle{asprecision=p, asdecimalpoint=mdec, asdigitgroups=mgrps}} = - punctuatenumber (fromMaybe '.' mdec) mgrps qstr - where - -- isint n = fromIntegral (round n) == n - qstr -- p == maxprecision && isint q = printf "%d" (round q::Integer) - | p == maxprecisionwithpoint = show q - | p == maxprecision = chopdotzero $ show q - | otherwise = show $ roundTo (fromIntegral p) q +showamountquantity amt@Amount{astyle=AmountStyle{asdecimalpoint=mdec, asdigitgroups=mgrps}} = + punctuatenumber (fromMaybe '.' mdec) mgrps . show $ amountRoundedQuantity amt -- | Replace a number string's decimal mark with the specified -- character, and add the specified digit group marks. 
The last digit @@ -445,24 +419,12 @@ where addseps [] s = s addseps (g:gs) s - | length s <= g = s - | otherwise = let (part,rest) = splitAt g s - in part ++ [c] ++ addseps gs rest + | toInteger (length s) <= toInteger g = s + | otherwise = let (part,rest) = genericSplitAt g s + in part ++ c : addseps gs rest repeatLast [] = [] repeatLast gs = init gs ++ repeat (last gs) -chopdotzero str = reverse $ case reverse str of - '0':'.':s -> s - s -> s - --- | For rendering: a special precision value which means show all available digits. -maxprecision :: Int -maxprecision = 999998 - --- | For rendering: a special precision value which forces display of a decimal point. -maxprecisionwithpoint :: Int -maxprecisionwithpoint = 999999 - -- like journalCanonicaliseAmounts -- | Canonicalise an amount's display style using the provided commodity style map. canonicaliseAmount :: M.Map CommoditySymbol AmountStyle -> Amount -> Amount @@ -477,7 +439,7 @@ fromInteger i = Mixed [fromInteger i] negate (Mixed as) = Mixed $ map negate as (+) (Mixed as) (Mixed bs) = normaliseMixedAmount $ Mixed $ as ++ bs - (*) = error' "error, mixed amounts do not support multiplication" + (*) = error' "error, mixed amounts do not support multiplication" -- PARTIAL: abs = error' "error, mixed amounts do not support abs" signum = error' "error, mixed amounts do not support signum" @@ -514,9 +476,7 @@ | null nonzeros = Mixed [newzero] | otherwise = Mixed nonzeros where - newzero = case filter (/= "") (map acommodity zeros) of - _:_ -> last zeros - _ -> nullamt + newzero = lastDef nullamt $ filter (not . T.null . acommodity) zeros (zeros, nonzeros) = partition amountIsZero $ map sumSimilarAmountsUsingFirstPrice $ groupBy groupfn $ @@ -537,6 +497,19 @@ normaliseMixedAmountSquashPricesForDisplay :: MixedAmount -> MixedAmount normaliseMixedAmountSquashPricesForDisplay = normaliseHelper True +-- | Unify a MixedAmount to a single commodity value if possible. 
+-- Like normaliseMixedAmount, this consolidates amounts of the same commodity +-- and discards zero amounts; but this one insists on simplifying to +-- a single commodity, and will return Nothing if this is not possible. +unifyMixedAmount :: MixedAmount -> Maybe Amount +unifyMixedAmount = foldM combine 0 . amounts + where + combine amount result + | amountIsZero amount = Just result + | amountIsZero result = Just amount + | acommodity amount == acommodity result = Just $ amount + result + | otherwise = Nothing + -- | Sum same-commodity amounts in a lossy way, applying the first -- price to the result and discarding any other prices. Only used as a -- rendering helper. @@ -664,13 +637,13 @@ ltraceamount s = traceWith (((s ++ ": ") ++).showMixedAmount) -- | Set the display precision in the amount's commodities. -setMixedAmountPrecision :: Int -> MixedAmount -> MixedAmount +setMixedAmountPrecision :: AmountPrecision -> MixedAmount -> MixedAmount setMixedAmountPrecision p (Mixed as) = Mixed $ map (setAmountPrecision p) as -- | Get the string representation of a mixed amount, showing each of its -- component amounts with the specified precision, ignoring their -- commoditys' display precision settings. -showMixedAmountWithPrecision :: Int -> MixedAmount -> String +showMixedAmountWithPrecision :: AmountPrecision -> MixedAmount -> String showMixedAmountWithPrecision p m = vConcatRightAligned $ map (showAmountWithPrecision p) $ amounts $ normaliseMixedAmountSquashPricesForDisplay m @@ -683,43 +656,47 @@ -- TODO these and related fns are comically complicated: -- | Get the string representation of a mixed amount, without showing any transaction prices. -showMixedAmountWithoutPrice :: MixedAmount -> String -showMixedAmountWithoutPrice m = intercalate "\n" $ map showamt as - where - Mixed as = normaliseMixedAmountSquashPricesForDisplay $ mixedAmountStripPrices m - showamt = printf (printf "%%%ds" width) . showAmountWithoutPrice - where - width = maximumDef 0 $ map (length . 
showAmount) as - --- | Colour version of showMixedAmountWithoutPrice. Any individual Amount --- which is negative is wrapped in ANSI codes to make it display in red. -cshowMixedAmountWithoutPrice :: MixedAmount -> String -cshowMixedAmountWithoutPrice m = intercalate "\n" $ map showamt as +-- With a True argument, adds ANSI codes to show negative amounts in red. +showMixedAmountWithoutPrice :: Bool -> MixedAmount -> String +showMixedAmountWithoutPrice c m = intercalate "\n" $ map showamt as where Mixed as = normaliseMixedAmountSquashPricesForDisplay $ mixedAmountStripPrices m showamt a = - (if isNegativeAmount a then color Dull Red else id) $ - printf (printf "%%%ds" width) $ showAmountWithoutPrice a + (if c && isNegativeAmount a then color Dull Red else id) $ + printf (printf "%%%ds" width) $ showAmountWithoutPrice c a where - width = maximumDef 0 $ map (length . showAmount) as + width = fromMaybe 0 . maximumMay $ map (length . showAmount) as mixedAmountStripPrices :: MixedAmount -> MixedAmount mixedAmountStripPrices (Mixed as) = Mixed $ map (\a -> a{aprice=Nothing}) as -- | Get the one-line string representation of a mixed amount, but without -- any \@ prices. -showMixedAmountOneLineWithoutPrice :: MixedAmount -> String -showMixedAmountOneLineWithoutPrice m = intercalate ", " $ map showAmountWithoutPrice as +-- With a True argument, adds ANSI codes to show negative amounts in red. +showMixedAmountOneLineWithoutPrice :: Bool -> MixedAmount -> String +showMixedAmountOneLineWithoutPrice c m = + intercalate ", " $ map (showAmountWithoutPrice c) as where (Mixed as) = normaliseMixedAmountSquashPricesForDisplay $ stripPrices m stripPrices (Mixed as) = Mixed $ map stripprice as where stripprice a = a{aprice=Nothing} --- | Colour version. 
-cshowMixedAmountOneLineWithoutPrice :: MixedAmount -> String -cshowMixedAmountOneLineWithoutPrice m = intercalate ", " $ map cshowAmountWithoutPrice as - where - (Mixed as) = normaliseMixedAmountSquashPricesForDisplay $ stripPrices m - stripPrices (Mixed as) = Mixed $ map stripprice as where stripprice a = a{aprice=Nothing} +-- | Like showMixedAmountOneLineWithoutPrice, but show at most two commodities, +-- with a elision indicator if there are more. +-- With a True argument, adds ANSI codes to show negative amounts in red. +showMixedAmountElided :: Bool -> MixedAmount -> String +showMixedAmountElided c m = intercalate ", " $ take 2 astrs ++ elisionstr + where + astrs = map (showAmountWithoutPrice c) as + where + (Mixed as) = normaliseMixedAmountSquashPricesForDisplay $ stripPrices m + where + stripPrices (Mixed as) = Mixed $ map stripprice as + where + stripprice a = a{aprice=Nothing} + elisionstr | n > 2 = [show (n - 2) ++ " more.."] + | otherwise = [] + where + n = length astrs -- | Canonicalise a mixed amount's display styles using the provided commodity style map. canonicaliseMixedAmount :: M.Map CommoditySymbol AmountStyle -> MixedAmount -> MixedAmount @@ -758,8 +735,8 @@ (usd (-1.23) + usd (-1.23)) @?= usd (-2.46) sum [usd 1.23,usd (-1.23),usd (-1.23),-(usd (-1.23))] @?= usd 0 -- highest precision is preserved - asprecision (astyle $ sum [usd 1 `withPrecision` 1, usd 1 `withPrecision` 3]) @?= 3 - asprecision (astyle $ sum [usd 1 `withPrecision` 3, usd 1 `withPrecision` 1]) @?= 3 + asprecision (astyle $ sum [usd 1 `withPrecision` Precision 1, usd 1 `withPrecision` Precision 3]) @?= Precision 3 + asprecision (astyle $ sum [usd 1 `withPrecision` Precision 3, usd 1 `withPrecision` Precision 1]) @?= Precision 3 -- adding different commodities assumes conversion rate 1 assertBool "" $ amountLooksZero (usd 1.23 - eur 1.23) @@ -773,10 +750,10 @@ test "adding mixed amounts to zero, the commodity and amount style are preserved" $ sum (map (Mixed . 
(:[])) [usd 1.25 - ,usd (-1) `withPrecision` 3 + ,usd (-1) `withPrecision` Precision 3 ,usd (-0.25) ]) - @?= Mixed [usd 0 `withPrecision` 3] + @?= Mixed [usd 0 `withPrecision` Precision 3] ,test "adding mixed amounts with total prices" $ do sum (map (Mixed . (:[])) @@ -796,8 +773,8 @@ ,test "showMixedAmountWithoutPrice" $ do let a = usd 1 `at` eur 2 - showMixedAmountWithoutPrice (Mixed [a]) @?= "$1.00" - showMixedAmountWithoutPrice (Mixed [a, -a]) @?= "0" + showMixedAmountWithoutPrice False (Mixed [a]) @?= "$1.00" + showMixedAmountWithoutPrice False (Mixed [a, -a]) @?= "0" ,tests "normaliseMixedAmount" [ test "a missing amount overrides any other amounts" $ diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Commodity.hs haskell-hledger-lib-1.19.1/Hledger/Data/Commodity.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Commodity.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Commodity.hs 2020-08-29 21:29:10.000000000 +0000 @@ -12,6 +12,7 @@ module Hledger.Data.Commodity where +import Control.Applicative (liftA2) import Data.Char (isDigit) import Data.List import Data.Maybe (fromMaybe) @@ -26,16 +27,16 @@ -- characters that may not be used in a non-quoted commodity symbol -nonsimplecommoditychars = "0123456789-+.@*;\n \"{}=" :: String - isNonsimpleCommodityChar :: Char -> Bool -isNonsimpleCommodityChar c = isDigit c || c `textElem` otherChars - where - otherChars = "-+.@*;\n \"{}=" :: T.Text - textElem = T.any . (==) - -quoteCommoditySymbolIfNeeded s | T.any (isNonsimpleCommodityChar) s = "\"" <> s <> "\"" - | otherwise = s +isNonsimpleCommodityChar = liftA2 (||) isDigit isOther + where + otherChars = "-+.@*;\t\n \"{}=" :: T.Text + isOther c = T.any (==c) otherChars + +quoteCommoditySymbolIfNeeded :: T.Text -> T.Text +quoteCommoditySymbolIfNeeded s + | T.any isNonsimpleCommodityChar s = "\"" <> s <> "\"" + | otherwise = s commodity = "" @@ -58,7 +59,7 @@ -- | Look up one of the sample commodities' symbol by name. 
comm :: String -> CommoditySymbol comm name = snd $ fromMaybe - (error' "commodity lookup failed") + (error' "commodity lookup failed") -- PARTIAL: (find (\n -> fst n == name) commoditysymbols) -- | Find the conversion rate between two commodities. Currently returns 1. diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Dates.hs haskell-hledger-lib-1.19.1/Hledger/Data/Dates.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Dates.hs 2020-06-05 02:31:19.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Dates.hs 2020-09-02 03:10:45.000000000 +0000 @@ -1,4 +1,3 @@ -{-# LANGUAGE CPP #-} {-# LANGUAGE NoMonoLocalBinds #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE TypeFamilies #-} @@ -38,7 +37,6 @@ spanContainsDate, periodContainsDate, parsedateM, - parsedate, showDate, showDateSpan, showDateSpanMonthAbbrev, @@ -49,14 +47,14 @@ parsePeriodExpr', nulldatespan, emptydatespan, - failIfInvalidYear, - failIfInvalidMonth, - failIfInvalidDay, datesepchar, datesepchars, isDateSepChar, spanStart, spanEnd, + spanStartYear, + spanEndYear, + spanYears, spansSpan, spanIntersect, spansIntersect, @@ -71,37 +69,36 @@ fixSmartDateStr, fixSmartDateStrEither, fixSmartDateStrEither', + yearp, daysInSpan, maybePeriod, - mkdatespan, ) where import Prelude () import "base-compat-batteries" Prelude.Compat hiding (fail) import qualified "base-compat-batteries" Control.Monad.Fail.Compat as Fail (MonadFail, fail) +import Control.Applicative (liftA2) import Control.Applicative.Permutations import Control.Monad (guard, unless) import "base-compat-batteries" Data.List.Compat +import Data.Char (digitToInt, isDigit, ord) import Data.Default import Data.Foldable (asum) +import Data.Function (on) import Data.Maybe import qualified Data.Set as Set import Data.Text (Text) import qualified Data.Text as T -#if MIN_VERSION_time(1,5,0) import Data.Time.Format hiding (months) -#else -import Data.Time.Format -import System.Locale (TimeLocale, defaultTimeLocale) -#endif import Data.Time.Calendar import 
Data.Time.Calendar.OrdinalDate import Data.Time.Clock import Data.Time.LocalTime -import Safe (headMay, lastMay, readMay, maximumMay, minimumMay) +import Safe (headMay, lastMay, maximumMay, minimumMay) import Text.Megaparsec import Text.Megaparsec.Char +import Text.Megaparsec.Char.Lexer (decimal) import Text.Megaparsec.Custom import Text.Printf @@ -149,6 +146,16 @@ spanEnd :: DateSpan -> Maybe Day spanEnd (DateSpan _ d) = d +spanStartYear :: DateSpan -> Maybe Year +spanStartYear (DateSpan d _) = fmap (first3 . toGregorian) d + +spanEndYear :: DateSpan -> Maybe Year +spanEndYear (DateSpan d _) = fmap (first3 . toGregorian) d + +-- | Get the 0-2 years mentioned explicitly in a DateSpan. +spanYears :: DateSpan -> [Year] +spanYears (DateSpan ma mb) = mapMaybe (fmap (first3 . toGregorian)) [ma,mb] + -- might be useful later: http://en.wikipedia.org/wiki/Allen%27s_interval_algebra -- | Get overall span enclosing multiple sequentially ordered spans. @@ -163,34 +170,34 @@ -- -- -- ==== Examples: --- >>> let t i d1 d2 = splitSpan i $ mkdatespan d1 d2 --- >>> t NoInterval "2008/01/01" "2009/01/01" +-- >>> let t i y1 m1 d1 y2 m2 d2 = splitSpan i $ DateSpan (Just $ fromGregorian y1 m1 d1) (Just $ fromGregorian y2 m2 d2) +-- >>> t NoInterval 2008 01 01 2009 01 01 -- [DateSpan 2008] --- >>> t (Quarters 1) "2008/01/01" "2009/01/01" +-- >>> t (Quarters 1) 2008 01 01 2009 01 01 -- [DateSpan 2008Q1,DateSpan 2008Q2,DateSpan 2008Q3,DateSpan 2008Q4] -- >>> splitSpan (Quarters 1) nulldatespan -- [DateSpan ..] 
--- >>> t (Days 1) "2008/01/01" "2008/01/01" -- an empty datespan +-- >>> t (Days 1) 2008 01 01 2008 01 01 -- an empty datespan -- [] --- >>> t (Quarters 1) "2008/01/01" "2008/01/01" +-- >>> t (Quarters 1) 2008 01 01 2008 01 01 -- [] --- >>> t (Months 1) "2008/01/01" "2008/04/01" +-- >>> t (Months 1) 2008 01 01 2008 04 01 -- [DateSpan 2008-01,DateSpan 2008-02,DateSpan 2008-03] --- >>> t (Months 2) "2008/01/01" "2008/04/01" +-- >>> t (Months 2) 2008 01 01 2008 04 01 -- [DateSpan 2008-01-01..2008-02-29,DateSpan 2008-03-01..2008-04-30] --- >>> t (Weeks 1) "2008/01/01" "2008/01/15" +-- >>> t (Weeks 1) 2008 01 01 2008 01 15 -- [DateSpan 2007-12-31W01,DateSpan 2008-01-07W02,DateSpan 2008-01-14W03] --- >>> t (Weeks 2) "2008/01/01" "2008/01/15" +-- >>> t (Weeks 2) 2008 01 01 2008 01 15 -- [DateSpan 2007-12-31..2008-01-13,DateSpan 2008-01-14..2008-01-27] --- >>> t (DayOfMonth 2) "2008/01/01" "2008/04/01" +-- >>> t (DayOfMonth 2) 2008 01 01 2008 04 01 -- [DateSpan 2007-12-02..2008-01-01,DateSpan 2008-01-02..2008-02-01,DateSpan 2008-02-02..2008-03-01,DateSpan 2008-03-02..2008-04-01] --- >>> t (WeekdayOfMonth 2 4) "2011/01/01" "2011/02/15" +-- >>> t (WeekdayOfMonth 2 4) 2011 01 01 2011 02 15 -- [DateSpan 2010-12-09..2011-01-12,DateSpan 2011-01-13..2011-02-09,DateSpan 2011-02-10..2011-03-09] --- >>> t (DayOfWeek 2) "2011/01/01" "2011/01/15" +-- >>> t (DayOfWeek 2) 2011 01 01 2011 01 15 -- [DateSpan 2010-12-28..2011-01-03,DateSpan 2011-01-04..2011-01-10,DateSpan 2011-01-11..2011-01-17] --- >>> t (DayOfYear 11 29) "2011/10/01" "2011/10/15" +-- >>> t (DayOfYear 11 29) 2011 10 01 2011 10 15 -- [DateSpan 2010-11-29..2011-11-28] --- >>> t (DayOfYear 11 29) "2011/12/01" "2012/12/15" +-- >>> t (DayOfYear 11 29) 2011 12 01 2012 12 15 -- [DateSpan 2011-11-29..2012-11-28,DateSpan 2012-11-29..2013-11-28] -- splitSpan :: Interval -> DateSpan -> [DateSpan] @@ -226,7 +233,7 @@ | otherwise = DateSpan (Just subs) (Just sube) : splitspan' start next (DateSpan (Just sube) (Just e)) where subs = 
start s sube = next subs - splitspan' _ _ _ = error' "won't happen, avoids warnings" + splitspan' _ _ _ = error' "won't happen, avoids warnings" -- PARTIAL: -- | Count the days in a DateSpan, or if it is open-ended return Nothing. daysInSpan :: DateSpan -> Maybe Integer @@ -258,7 +265,7 @@ -- | Calculate the intersection of two datespans. -- -- For non-intersecting spans, gives an empty span beginning on the second's start date: --- >>> mkdatespan "2018-01-01" "2018-01-03" `spanIntersect` mkdatespan "2018-01-03" "2018-01-05" +-- >>> DateSpan (Just $ fromGregorian 2018 01 01) (Just $ fromGregorian 2018 01 03) `spanIntersect` DateSpan (Just $ fromGregorian 2018 01 03) (Just $ fromGregorian 2018 01 05) -- DateSpan 2018-01-03..2018-01-02 spanIntersect (DateSpan b1 e1) (DateSpan b2 e2) = DateSpan b e where @@ -329,7 +336,7 @@ -- | Like parsePeriodExpr, but call error' on failure. parsePeriodExpr' :: Day -> Text -> (Interval, DateSpan) parsePeriodExpr' refdate s = - either (error' . ("failed to parse:" ++) . customErrorBundlePretty) id $ + either (error' . ("failed to parse:" ++) . 
customErrorBundlePretty) id $ -- PARTIAL: parsePeriodExpr refdate s maybePeriod :: Day -> Text -> Maybe (Interval,DateSpan) @@ -355,30 +362,26 @@ (ry,rm,_) = toGregorian refdate (b,e) = span sdate span :: SmartDate -> (Day,Day) - span ("","","today") = (refdate, nextday refdate) - span ("","this","day") = (refdate, nextday refdate) - span ("","","yesterday") = (prevday refdate, refdate) - span ("","last","day") = (prevday refdate, refdate) - span ("","","tomorrow") = (nextday refdate, addDays 2 refdate) - span ("","next","day") = (nextday refdate, addDays 2 refdate) - span ("","last","week") = (prevweek refdate, thisweek refdate) - span ("","this","week") = (thisweek refdate, nextweek refdate) - span ("","next","week") = (nextweek refdate, startofweek $ addDays 14 refdate) - span ("","last","month") = (prevmonth refdate, thismonth refdate) - span ("","this","month") = (thismonth refdate, nextmonth refdate) - span ("","next","month") = (nextmonth refdate, startofmonth $ addGregorianMonthsClip 2 refdate) - span ("","last","quarter") = (prevquarter refdate, thisquarter refdate) - span ("","this","quarter") = (thisquarter refdate, nextquarter refdate) - span ("","next","quarter") = (nextquarter refdate, startofquarter $ addGregorianMonthsClip 6 refdate) - span ("","last","year") = (prevyear refdate, thisyear refdate) - span ("","this","year") = (thisyear refdate, nextyear refdate) - span ("","next","year") = (nextyear refdate, startofyear $ addGregorianYearsClip 2 refdate) - span ("","",d) = (day, nextday day) where day = fromGregorian ry rm (read d) - span ("",m,"") = (startofmonth day, nextmonth day) where day = fromGregorian ry (read m) 1 - span ("",m,d) = (day, nextday day) where day = fromGregorian ry (read m) (read d) - span (y,"","") = (startofyear day, nextyear day) where day = fromGregorian (read y) 1 1 - span (y,m,"") = (startofmonth day, nextmonth day) where day = fromGregorian (read y) (read m) 1 - span (y,m,d) = (day, nextday day) where day = fromGregorian 
(read y) (read m) (read d) + span (SmartRelative This Day) = (refdate, nextday refdate) + span (SmartRelative Last Day) = (prevday refdate, refdate) + span (SmartRelative Next Day) = (nextday refdate, addDays 2 refdate) + span (SmartRelative This Week) = (thisweek refdate, nextweek refdate) + span (SmartRelative Last Week) = (prevweek refdate, thisweek refdate) + span (SmartRelative Next Week) = (nextweek refdate, startofweek $ addDays 14 refdate) + span (SmartRelative This Month) = (thismonth refdate, nextmonth refdate) + span (SmartRelative Last Month) = (prevmonth refdate, thismonth refdate) + span (SmartRelative Next Month) = (nextmonth refdate, startofmonth $ addGregorianMonthsClip 2 refdate) + span (SmartRelative This Quarter) = (thisquarter refdate, nextquarter refdate) + span (SmartRelative Last Quarter) = (prevquarter refdate, thisquarter refdate) + span (SmartRelative Next Quarter) = (nextquarter refdate, startofquarter $ addGregorianMonthsClip 6 refdate) + span (SmartRelative This Year) = (thisyear refdate, nextyear refdate) + span (SmartRelative Last Year) = (prevyear refdate, thisyear refdate) + span (SmartRelative Next Year) = (nextyear refdate, startofyear $ addGregorianYearsClip 2 refdate) + span (SmartAssumeStart y Nothing) = (startofyear day, nextyear day) where day = fromGregorian y 1 1 + span (SmartAssumeStart y (Just (m, Nothing))) = (startofmonth day, nextmonth day) where day = fromGregorian y m 1 + span (SmartAssumeStart y (Just (m, Just d))) = (day, nextday day) where day = fromGregorian y m d + span (SmartFromReference m d) = (day, nextday day) where day = fromGregorian ry (fromMaybe rm m) d + span (SmartMonth m) = (startofmonth day, nextmonth day) where day = fromGregorian ry m 1 -- showDay :: Day -> String -- showDay day = printf "%04d/%02d/%02d" y m d where (y,m,d) = toGregorian day @@ -387,7 +390,7 @@ -- the provided reference date, or raise an error. fixSmartDateStr :: Day -> Text -> String fixSmartDateStr d s = - either (error' . 
printf "could not parse date %s %s" (show s) . show) id $ + either (error' . printf "could not parse date %s %s" (show s) . show) id $ -- PARTIAL: (fixSmartDateStrEither d s :: Either (ParseErrorBundle Text CustomErr) String) -- | A safe version of fixSmartDateStr. @@ -404,7 +407,7 @@ -- -- ==== Examples: -- >>> :set -XOverloadedStrings --- >>> let t = fixSmartDateStr (parsedate "2008/11/26") +-- >>> let t = fixSmartDateStr (fromGregorian 2008 11 26) -- >>> t "0000-01-01" -- "0000-01-01" -- >>> t "1999-12-02" @@ -477,31 +480,25 @@ fixSmartDate refdate = fix where fix :: SmartDate -> Day - fix ("", "", "today") = fromGregorian ry rm rd - fix ("", "this", "day") = fromGregorian ry rm rd - fix ("", "", "yesterday") = prevday refdate - fix ("", "last", "day") = prevday refdate - fix ("", "", "tomorrow") = nextday refdate - fix ("", "next", "day") = nextday refdate - fix ("", "last", "week") = prevweek refdate - fix ("", "this", "week") = thisweek refdate - fix ("", "next", "week") = nextweek refdate - fix ("", "last", "month") = prevmonth refdate - fix ("", "this", "month") = thismonth refdate - fix ("", "next", "month") = nextmonth refdate - fix ("", "last", "quarter") = prevquarter refdate - fix ("", "this", "quarter") = thisquarter refdate - fix ("", "next", "quarter") = nextquarter refdate - fix ("", "last", "year") = prevyear refdate - fix ("", "this", "year") = thisyear refdate - fix ("", "next", "year") = nextyear refdate - fix ("", "", d) = fromGregorian ry rm (read d) - fix ("", m, "") = fromGregorian ry (read m) 1 - fix ("", m, d) = fromGregorian ry (read m) (read d) - fix (y, "", "") = fromGregorian (read y) 1 1 - fix (y, m, "") = fromGregorian (read y) (read m) 1 - fix (y, m, d) = fromGregorian (read y) (read m) (read d) - (ry, rm, rd) = toGregorian refdate + fix (SmartRelative This Day) = refdate + fix (SmartRelative Last Day) = prevday refdate + fix (SmartRelative Next Day) = nextday refdate + fix (SmartRelative This Week) = thisweek refdate + fix 
(SmartRelative Last Week) = prevweek refdate + fix (SmartRelative Next Week) = nextweek refdate + fix (SmartRelative This Month) = thismonth refdate + fix (SmartRelative Last Month) = prevmonth refdate + fix (SmartRelative Next Month) = nextmonth refdate + fix (SmartRelative This Quarter) = thisquarter refdate + fix (SmartRelative Last Quarter) = prevquarter refdate + fix (SmartRelative Next Quarter) = nextquarter refdate + fix (SmartRelative This Year) = thisyear refdate + fix (SmartRelative Last Year) = prevyear refdate + fix (SmartRelative Next Year) = nextyear refdate + fix (SmartAssumeStart y md) = fromGregorian y (maybe 1 fst md) (fromMaybe 1 $ snd =<< md) + fix (SmartFromReference m d) = fromGregorian ry (fromMaybe rm m) d + fix (SmartMonth m) = fromGregorian ry m 1 + (ry, rm, _) = toGregorian refdate prevday :: Day -> Day prevday = addDays (-1) @@ -543,7 +540,7 @@ -- Examples: lets take 2017-11-22. Year-long intervals covering it that -- starts before Nov 22 will start in 2017. However -- intervals that start after Nov 23rd should start in 2016: --- >>> let wed22nd = parsedate "2017-11-22" +-- >>> let wed22nd = fromGregorian 2017 11 22 -- >>> nthdayofyearcontaining 11 21 wed22nd -- 2017-11-21 -- >>> nthdayofyearcontaining 11 22 wed22nd @@ -558,12 +555,13 @@ -- 2017-01-01 nthdayofyearcontaining :: Month -> MonthDay -> Day -> Day nthdayofyearcontaining m md date - | not (validMonth $ show m) = error' $ "nthdayofyearcontaining: invalid month "++show m - | not (validDay $ show md) = error' $ "nthdayofyearcontaining: invalid day " ++show md + -- PARTIAL: + | not (validMonth m) = error' $ "nthdayofyearcontaining: invalid month "++show m + | not (validDay md) = error' $ "nthdayofyearcontaining: invalid day " ++show md | mmddOfSameYear <= date = mmddOfSameYear | otherwise = mmddOfPrevYear - where mmddOfSameYear = addDays (fromIntegral md-1) $ applyN (m-1) nextmonth s - mmddOfPrevYear = addDays (fromIntegral md-1) $ applyN (m-1) nextmonth $ prevyear s + where 
mmddOfSameYear = addDays (toInteger md-1) $ applyN (m-1) nextmonth s + mmddOfPrevYear = addDays (toInteger md-1) $ applyN (m-1) nextmonth $ prevyear s s = startofyear date -- | For given date d find month-long interval that starts on nth day of month @@ -573,7 +571,7 @@ -- Examples: lets take 2017-11-22. Month-long intervals covering it that -- start on 1st-22nd of month will start in Nov. However -- intervals that start on 23rd-30th of month should start in Oct: --- >>> let wed22nd = parsedate "2017-11-22" +-- >>> let wed22nd = fromGregorian 2017 11 22 -- >>> nthdayofmonthcontaining 1 wed22nd -- 2017-11-01 -- >>> nthdayofmonthcontaining 12 wed22nd @@ -586,7 +584,8 @@ -- 2017-10-30 nthdayofmonthcontaining :: MonthDay -> Day -> Day nthdayofmonthcontaining md date - | not (validDay $ show md) = error' $ "nthdayofmonthcontaining: invalid day " ++show md + -- PARTIAL: + | not (validDay md) = error' $ "nthdayofmonthcontaining: invalid day " ++show md | nthOfSameMonth <= date = nthOfSameMonth | otherwise = nthOfPrevMonth where nthOfSameMonth = nthdayofmonth md s @@ -599,7 +598,7 @@ -- Examples: 2017-11-22 is Wed. Week-long intervals that cover it and -- start on Mon, Tue or Wed will start in the same week. 
However -- intervals that start on Thu or Fri should start in prev week: --- >>> let wed22nd = parsedate "2017-11-22" +-- >>> let wed22nd = fromGregorian 2017 11 22 -- >>> nthdayofweekcontaining 1 wed22nd -- 2017-11-20 -- >>> nthdayofweekcontaining 2 wed22nd @@ -613,8 +612,8 @@ nthdayofweekcontaining :: WeekDay -> Day -> Day nthdayofweekcontaining n d | nthOfSameWeek <= d = nthOfSameWeek | otherwise = nthOfPrevWeek - where nthOfSameWeek = addDays (fromIntegral n-1) s - nthOfPrevWeek = addDays (fromIntegral n-1) $ prevweek s + where nthOfSameWeek = addDays (toInteger n-1) s + nthOfPrevWeek = addDays (toInteger n-1) $ prevweek s s = startofweek d -- | For given date d find month-long interval that starts on nth weekday of month @@ -623,7 +622,7 @@ -- Examples: 2017-11-22 is 3rd Wed of Nov. Month-long intervals that cover it and -- start on 1st-4th Wed will start in Nov. However -- intervals that start on 4th Thu or Fri or later should start in Oct: --- >>> let wed22nd = parsedate "2017-11-22" +-- >>> let wed22nd = fromGregorian 2017 11 22 -- >>> nthweekdayofmonthcontaining 1 3 wed22nd -- 2017-11-01 -- >>> nthweekdayofmonthcontaining 3 2 wed22nd @@ -641,14 +640,16 @@ nthWeekdayPrevMonth = advancetonthweekday n wd $ prevmonth d -- | Advance to nth weekday wd after given start day s +-- Can call error. advancetonthweekday :: Int -> WeekDay -> Day -> Day advancetonthweekday n wd s = + -- PARTIAL: maybe err (addWeeks (n-1)) $ firstMatch (>=s) $ iterate (addWeeks 1) $ firstweekday s where err = error' "advancetonthweekday: should not happen" - addWeeks k = addDays (7 * fromIntegral k) + addWeeks k = addDays (7 * toInteger k) firstMatch p = headMay . dropWhile (not . p) - firstweekday = addDays (fromIntegral wd-1) . startofweek + firstweekday = addDays (toInteger wd-1) . 
startofweek ---------------------------------------------------------------------- -- parsing @@ -660,55 +661,22 @@ -- parseTime defaultTimeLocale "%Y-%m-%d %H:%M:%S" s -- ] -parsetime :: ParseTime t => TimeLocale -> String -> String -> Maybe t -parsetime = -#if MIN_VERSION_time(1,5,0) - parseTimeM True -#else - parseTime -#endif - - -- | Try to parse a couple of date string formats: -- `YYYY-MM-DD`, `YYYY/MM/DD` or `YYYY.MM.DD`, with leading zeros required. -- For internal use, not quite the same as the journal's "simple dates". +-- >>> parsedateM "2008/02/03" +-- Just 2008-02-03 +-- >>> parsedateM "2008/02/03/" +-- Nothing +-- >>> parsedateM "2008/02/30" +-- Nothing parsedateM :: String -> Maybe Day parsedateM s = asum [ - parsetime defaultTimeLocale "%Y-%m-%d" s, - parsetime defaultTimeLocale "%Y/%m/%d" s, - parsetime defaultTimeLocale "%Y.%m.%d" s + parseTimeM True defaultTimeLocale "%Y-%m-%d" s, + parseTimeM True defaultTimeLocale "%Y/%m/%d" s, + parseTimeM True defaultTimeLocale "%Y.%m.%d" s ] - --- -- | Parse a date-time string to a time type, or raise an error. --- parsedatetime :: String -> LocalTime --- parsedatetime s = fromMaybe (error' $ "could not parse timestamp \"" ++ s ++ "\"") --- (parsedatetimeM s) - --- | Like parsedateM, raising an error on parse failure. --- --- >>> parsedate "2008/02/03" --- 2008-02-03 -parsedate :: String -> Day -parsedate s = fromMaybe (error' $ "could not parse date \"" ++ s ++ "\"") - (parsedateM s) --- doctests I haven't been able to make compatible with both GHC 7 and 8 --- -- >>> parsedate "2008/02/03/" --- -- *** Exception: could not parse date "2008/02/03/" --- #if MIN_VERSION_base(4,9,0) --- -- ... --- #endif --- #if MIN_VERSION_time(1,6,0) --- -- >>> parsedate "2008/02/30" -- with time >= 1.6, invalid dates are rejected --- -- *** Exception: could not parse date "2008/02/30" --- #if MIN_VERSION_base(4,9,0) --- -- ... 
--- #endif --- #else --- -- >>> parsedate "2008/02/30" -- with time < 1.6, they are silently adjusted --- -- 2008-02-29 --- #endif - {-| Parse a date in any of the formats allowed in Ledger's period expressions, and some others. Assumes any text in the parse stream has been lowercased. @@ -739,15 +707,15 @@ YYYYMMDD is parsed as year-month-date if those parts are valid (>=4 digits, 1-12, and 1-31 respectively): >>> parsewith (smartdate <* eof) "20181201" -Right ("2018","12","01") +Right (SmartAssumeStart 2018 (Just (12,Just 1))) YYYYMM is parsed as year-month-01 if year and month are valid: >>> parsewith (smartdate <* eof) "201804" -Right ("2018","04","01") +Right (SmartAssumeStart 2018 (Just (4,Nothing))) With an invalid month, it's parsed as a year: >>> parsewith (smartdate <* eof) "201813" -Right ("201813","","") +Right (SmartAssumeStart 201813 Nothing) A 9+ digit number beginning with valid YYYYMMDD gives an error: >>> parsewith (smartdate <* eof) "201801012" @@ -755,22 +723,29 @@ Big numbers not beginning with a valid YYYYMMDD are parsed as a year: >>> parsewith (smartdate <* eof) "201813012" -Right ("201813012","","") +Right (SmartAssumeStart 201813012 Nothing) -} smartdate :: TextParser m SmartDate -smartdate = do +smartdate = choice' -- XXX maybe obscures date errors ? 
see ledgerdate - (y,m,d) <- choice' [yyyymmdd, yyyymm, ymd, ym, md, y, d, month, mon, today, yesterday, tomorrow, lastthisnextthing] - return (y,m,d) + [ yyyymmdd, ymd + , (\(m,d) -> SmartFromReference (Just m) d) <$> md + , (SmartFromReference Nothing <$> decimal) >>= failIfInvalidDate + , SmartMonth <$> (month <|> mon) + , SmartRelative This Day <$ string' "today" + , SmartRelative Last Day <$ string' "yesterday" + , SmartRelative Next Day <$ string' "tomorrow" + , liftA2 SmartRelative (seqP <* skipNonNewlineSpaces) intervalP + ] + where + seqP = choice [This <$ string' "this", Last <$ string' "last", Next <$ string' "next"] + intervalP = choice [Day <$ string' "day", Week <$ string' "week", Month <$ string' "month", + Quarter <$ string' "quarter", Year <$ string' "year"] -- | Like smartdate, but there must be nothing other than whitespace after the date. smartdateonly :: TextParser m SmartDate -smartdateonly = do - d <- smartdate - skipMany spacenonewline - eof - return d +smartdateonly = smartdate <* skipNonNewlineSpaces <* eof datesepchars :: String datesepchars = "/-." @@ -781,73 +756,47 @@ isDateSepChar :: Char -> Bool isDateSepChar c = c == '-' || c == '/' || c == '.' 
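The hunk above replaces the old String-based `validYear`/`validMonth`/`validDay` (which re-`read` their input) with plain `Int` predicates. As a reference point, here is a self-contained sketch of the new shape — the `isDateSepChar`, `validMonth`, and `validDay` bodies are copied from the `+` lines; the `main` driver is added here purely for illustration and is not part of hledger:

```haskell
-- Sketch of the validators introduced in this diff. The 1.18 versions
-- validated Strings (readMay + bounds check); the 1.19 versions test
-- already-parsed Ints, so no re-parsing can fail.
isDateSepChar :: Char -> Bool
isDateSepChar c = c == '-' || c == '/' || c == '.'

validMonth, validDay :: Int -> Bool
validMonth n = n >= 1 && n <= 12
validDay   n = n >= 1 && n <= 31

main :: IO ()
main = print (validMonth 12, validMonth 13, validDay 31, filter isDateSepChar "2008/02-03")
```

Note that `validDay` still only bounds-checks 1–31; full calendar validity (e.g. rejecting Feb 30) is handled separately by `failIfInvalidDate` via `fromGregorianValid`, as the surrounding hunk shows.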
-validYear, validMonth, validDay :: String -> Bool -validYear s = length s >= 4 && isJust (readMay s :: Maybe Year) -validMonth s = maybe False (\n -> n>=1 && n<=12) $ readMay s -validDay s = maybe False (\n -> n>=1 && n<=31) $ readMay s - -failIfInvalidYear, failIfInvalidMonth, failIfInvalidDay :: (Fail.MonadFail m) => String -> m () -failIfInvalidYear s = unless (validYear s) $ Fail.fail $ "bad year number: " ++ s -failIfInvalidMonth s = unless (validMonth s) $ Fail.fail $ "bad month number: " ++ s -failIfInvalidDay s = unless (validDay s) $ Fail.fail $ "bad day number: " ++ s +validMonth, validDay :: Int -> Bool +validMonth n = n >= 1 && n <= 12 +validDay n = n >= 1 && n <= 31 + +failIfInvalidDate :: Fail.MonadFail m => SmartDate -> m SmartDate +failIfInvalidDate s = unless isValid (Fail.fail $ "bad smart date: " ++ show s) *> return s + where isValid = case s of + SmartAssumeStart y (Just (m, md)) -> isJust $ fromGregorianValid y m (fromMaybe 1 md) + SmartFromReference mm d -> isJust $ fromGregorianValid 2004 (fromMaybe 1 mm) d + SmartMonth m -> validMonth m + _ -> True yyyymmdd :: TextParser m SmartDate yyyymmdd = do - y <- count 4 digitChar - m <- count 2 digitChar - failIfInvalidMonth m - d <- count 2 digitChar - failIfInvalidDay d - return (y,m,d) - -yyyymm :: TextParser m SmartDate -yyyymm = do - y <- count 4 digitChar - m <- count 2 digitChar - failIfInvalidMonth m - return (y,m,"01") + y <- read <$> count 4 digitChar + m <- read <$> count 2 digitChar + d <- optional $ read <$> count 2 digitChar + let date = SmartAssumeStart y $ Just (m, d) + failIfInvalidDate date ymd :: TextParser m SmartDate -ymd = do - y <- some digitChar - failIfInvalidYear y - sep <- datesepchar - m <- some digitChar - failIfInvalidMonth m - char sep - d <- some digitChar - failIfInvalidDay d - return $ (y,m,d) - -ym :: TextParser m SmartDate -ym = do - y <- some digitChar - failIfInvalidYear y - datesepchar - m <- some digitChar - failIfInvalidMonth m - return (y,m,"") - -y :: 
TextParser m SmartDate -y = do - y <- some digitChar - failIfInvalidYear y - return (y,"","") - -d :: TextParser m SmartDate -d = do - d <- some digitChar - failIfInvalidDay d - return ("","",d) +ymd = liftA2 SmartAssumeStart yearp (optional $ try monthday) >>= failIfInvalidDate + where monthday = do + sep <- datesepchar + liftA2 (,) decimal . optional $ char sep *> decimal -md :: TextParser m SmartDate +md :: TextParser m (Month, MonthDay) md = do - m <- some digitChar - failIfInvalidMonth m + m <- decimal datesepchar - d <- some digitChar - failIfInvalidDay d - return ("",m,d) + d <- decimal + _ <- failIfInvalidDate $ SmartFromReference (Just m) d + return (m, d) + +-- | Parse a year number from a Text, making sure that at least four digits are +-- used. +yearp :: TextParser m Integer +yearp = do + year <- takeWhile1P (Just "year") isDigit + unless (T.length year >= 4) . Fail.fail $ "Year must contain at least 4 digits: " <> T.unpack year + return $ readDecimal year -- These are compared case insensitively, and should all be kept lower case. months = ["january","february","march","april","may","june", @@ -856,23 +805,9 @@ weekdays = ["monday","tuesday","wednesday","thursday","friday","saturday","sunday"] weekdayabbrevs = ["mon","tue","wed","thu","fri","sat","sun"] --- | Convert a case insensitive english month name to a month number. -monthIndex name = maybe 0 (+1) $ T.toLower name `elemIndex` months - --- | Convert a case insensitive english three-letter month abbreviation to a month number. -monIndex name = maybe 0 (+1) $ T.toLower name `elemIndex` monthabbrevs - -month :: TextParser m SmartDate -month = do - m <- choice $ map (try . string') months - let i = monthIndex m - return ("",show i,"") - -mon :: TextParser m SmartDate -mon = do - m <- choice $ map (try . 
string') monthabbrevs - let i = monIndex m - return ("",show i,"") +month, mon :: TextParser m Month +month = choice $ zipWith (\i m -> i <$ string' m) [1..12] months +mon = choice $ zipWith (\i m -> i <$ string' m) [1..12] monthabbrevs weekday :: TextParser m Int weekday = do @@ -882,41 +817,20 @@ [] -> Fail.fail $ "weekday: should not happen: attempted to find " <> show wday <> " in " <> show (weekdays ++ weekdayabbrevs) -today,yesterday,tomorrow :: TextParser m SmartDate -today = string' "today" >> return ("","","today") -yesterday = string' "yesterday" >> return ("","","yesterday") -tomorrow = string' "tomorrow" >> return ("","","tomorrow") - -lastthisnextthing :: TextParser m SmartDate -lastthisnextthing = do - r <- choice $ map string' [ - "last" - ,"this" - ,"next" - ] - skipMany spacenonewline -- make the space optional for easier scripting - p <- choice $ map string' [ - "day" - ,"week" - ,"month" - ,"quarter" - ,"year" - ] --- XXX support these in fixSmartDate --- ++ (map string' $ months ++ monthabbrevs ++ weekdays ++ weekdayabbrevs) - - return ("", T.unpack r, T.unpack p) - -- | Parse a period expression, specifying a date span and optionally -- a reporting interval. Requires a reference "today" date for -- resolving any relative start/end dates (only; it is not needed for -- parsing the reporting interval). -- --- >>> let p = parsePeriodExpr (parsedate "2008-11-26") +-- >>> let p = parsePeriodExpr (fromGregorian 2008 11 26) -- >>> p "from Aug to Oct" -- Right (NoInterval,DateSpan 2008-08-01..2008-09-30) -- >>> p "aug to oct" -- Right (NoInterval,DateSpan 2008-08-01..2008-09-30) +-- >>> p "2009q2" +-- Right (NoInterval,DateSpan 2009Q2) +-- >>> p "Q3" +-- Right (NoInterval,DateSpan 2008Q3) -- >>> p "every 3 days in Aug" -- Right (Days 3,DateSpan 2008-08) -- >>> p "daily from aug" @@ -959,152 +873,117 @@ -- Right (DayOfMonth 2,DateSpan 2009-01-01..) 
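The rewritten `reportingintervalp` below compresses the old `do`-block alternatives into applicative `choice'` branches, and also adds "fortnightly" as a synonym for `Weeks 2`. As a dependency-free illustration of the same "compact word / every UNIT / every N UNITs" pattern, here is a naive word-based recogniser — `readInterval` is a hypothetical helper invented for this sketch, not hledger code; the real parser uses megaparsec combinators (`choice'`, `string'`, `decimal`):

```haskell
import Text.Read (readMaybe)

-- Simplified stand-in for hledger's Interval; the real type has more cases.
data Interval = Days Int | Weeks Int | Months Int
  deriving (Eq, Show)

-- Illustration-only mirror of the phrasings tryinterval and the new
-- choice' branches accept: "daily", "fortnightly", "every day",
-- "every 3 days", etc. Real hledger tokenises with megaparsec instead.
readInterval :: String -> Maybe Interval
readInterval s = case words s of
  [compact] ->
    lookup compact [ ("daily", Days 1), ("weekly", Weeks 1)
                   , ("biweekly", Weeks 2), ("fortnightly", Weeks 2)
                   , ("monthly", Months 1), ("bimonthly", Months 2) ]
  ["every", unit] ->
    lookup unit [("day", Days 1), ("week", Weeks 1), ("month", Months 1)]
  ["every", n, unit] -> do
    k   <- readMaybe n
    con <- lookup unit [("days", Days), ("weeks", Weeks), ("months", Months)]
    pure (con k)
  _ -> Nothing

main :: IO ()
main = mapM_ (print . readInterval) ["daily", "fortnightly", "every 3 days", "nonsense"]
```

The design mirrors the diff's structure: one branch for the compact form, one for "every UNIT" (count 1), and one for "every N UNITs", which is exactly how `tryinterval singular compact intcons` is organised in the `+` lines.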
periodexprp :: Day -> TextParser m (Interval, DateSpan) periodexprp rdate = do - skipMany spacenonewline - choice $ map try [ - intervalanddateperiodexprp rdate, - (,) NoInterval <$> periodexprdatespanp rdate - ] + skipNonNewlineSpaces + choice' [ intervalanddateperiodexprp rdate + , (,) NoInterval <$> periodexprdatespanp rdate + ] -- Parse a reporting interval and a date span. intervalanddateperiodexprp :: Day -> TextParser m (Interval, DateSpan) intervalanddateperiodexprp rdate = do i <- reportingintervalp s <- option def . try $ do - skipMany spacenonewline + skipNonNewlineSpaces periodexprdatespanp rdate return (i,s) -- Parse a reporting interval. reportingintervalp :: TextParser m Interval -reportingintervalp = choice' [ - tryinterval "day" "daily" Days, - tryinterval "week" "weekly" Weeks, - tryinterval "month" "monthly" Months, - tryinterval "quarter" "quarterly" Quarters, - tryinterval "year" "yearly" Years, - do string' "biweekly" - return $ Weeks 2, - do string' "bimonthly" - return $ Months 2, - do string' "every" - skipMany spacenonewline - n <- nth - skipMany spacenonewline - string' "day" - of_ "week" - return $ DayOfWeek n, - do string' "every" - skipMany spacenonewline - DayOfWeek <$> weekday, - do string' "every" - skipMany spacenonewline - n <- nth - skipMany spacenonewline - string' "day" - optOf_ "month" - return $ DayOfMonth n, - do string' "every" - let mnth = choice' [month, mon] >>= \(_,m,_) -> return (read m) - d_o_y <- runPermutation $ - DayOfYear <$> toPermutation (try (skipMany spacenonewline *> mnth)) - <*> toPermutation (try (skipMany spacenonewline *> nth)) - optOf_ "year" - return d_o_y, - do string' "every" - skipMany spacenonewline - ("",m,d) <- md - optOf_ "year" - return $ DayOfYear (read m) (read d), - do string' "every" - skipMany spacenonewline - n <- nth - skipMany spacenonewline - wd <- weekday - optOf_ "month" - return $ WeekdayOfMonth n wd - ] - where - of_ period = do - skipMany spacenonewline - string' "of" - skipMany 
spacenonewline - string' period - - optOf_ period = optional $ try $ of_ period - - nth = do n <- some digitChar - choice' $ map string' ["st","nd","rd","th"] - return $ read n - - -- Parse any of several variants of a basic interval, eg "daily", "every day", "every N days". - tryinterval :: String -> String -> (Int -> Interval) -> TextParser m Interval - tryinterval singular compact intcons = - choice' [ - do string' compact' - return $ intcons 1, - do string' "every" - skipMany spacenonewline - string' singular' - return $ intcons 1, - do string' "every" - skipMany spacenonewline - n <- read <$> some digitChar - skipMany spacenonewline - string' plural' - return $ intcons n - ] - where - compact' = T.pack compact - singular' = T.pack singular - plural' = T.pack $ singular ++ "s" +reportingintervalp = choice' + [ tryinterval "day" "daily" Days + , tryinterval "week" "weekly" Weeks + , tryinterval "month" "monthly" Months + , tryinterval "quarter" "quarterly" Quarters + , tryinterval "year" "yearly" Years + , Weeks 2 <$ string' "biweekly" + , Weeks 2 <$ string' "fortnightly" + , Months 2 <$ string' "bimonthly" + , string' "every" *> skipNonNewlineSpaces *> choice' + [ DayOfWeek <$> (nth <* skipNonNewlineSpaces <* string' "day" <* of_ "week") + , DayOfMonth <$> (nth <* skipNonNewlineSpaces <* string' "day" <* optOf_ "month") + , liftA2 WeekdayOfMonth nth $ skipNonNewlineSpaces *> weekday <* optOf_ "month" + , uncurry DayOfYear <$> (md <* optOf_ "year") + , DayOfWeek <$> weekday + , d_o_y <* optOf_ "year" + ] + ] + where + of_ period = + skipNonNewlineSpaces *> string' "of" *> skipNonNewlineSpaces *> string' period + + optOf_ period = optional . 
try $ of_ period + + nth = decimal <* choice (map string' ["st","nd","rd","th"]) + d_o_y = runPermutation $ liftA2 DayOfYear (toPermutation $ (month <|> mon) <* skipNonNewlineSpaces) + (toPermutation $ nth <* skipNonNewlineSpaces) + + -- Parse any of several variants of a basic interval, eg "daily", "every day", "every N days". + tryinterval :: String -> String -> (Int -> Interval) -> TextParser m Interval + tryinterval singular compact intcons = intcons <$> choice' + [ 1 <$ string' compact' + , string' "every" *> skipNonNewlineSpaces *> choice + [ 1 <$ string' singular' + , decimal <* skipNonNewlineSpaces <* string' plural' + ] + ] + where + compact' = T.pack compact + singular' = T.pack singular + plural' = T.pack $ singular ++ "s" periodexprdatespanp :: Day -> TextParser m DateSpan periodexprdatespanp rdate = choice $ map try [ doubledatespanp rdate, + quarterdatespanp rdate, fromdatespanp rdate, todatespanp rdate, justdatespanp rdate ] -- | --- -- >>> parsewith (doubledatespan (parsedate "2018/01/01") <* eof) "20180101-201804" --- Right DateSpan 2018-01-01..2018-04-01 +-- >>> parsewith (doubledatespanp (fromGregorian 2018 01 01) <* eof) "20180101-201804" +-- Right DateSpan 2018Q1 doubledatespanp :: Day -> TextParser m DateSpan -doubledatespanp rdate = do - optional (string' "from" >> skipMany spacenonewline) - b <- smartdate - skipMany spacenonewline - optional (choice [string' "to", string "..", string' "-"] >> skipMany spacenonewline) - DateSpan (Just $ fixSmartDate rdate b) . Just . fixSmartDate rdate <$> smartdate +doubledatespanp rdate = liftA2 fromToSpan + (optional (string' "from" *> skipNonNewlineSpaces) *> smartdate) + (skipNonNewlineSpaces *> choice [string' "to", string "..", string "-"] + *> skipNonNewlineSpaces *> smartdate) + where + fromToSpan = DateSpan `on` (Just . 
fixSmartDate rdate) + +-- | +-- >>> parsewith (quarterdatespanp (fromGregorian 2018 01 01) <* eof) "q1" +-- Right DateSpan 2018Q1 +-- >>> parsewith (quarterdatespanp (fromGregorian 2018 01 01) <* eof) "Q1" +-- Right DateSpan 2018Q1 +-- >>> parsewith (quarterdatespanp (fromGregorian 2018 01 01) <* eof) "2020q4" +-- Right DateSpan 2020Q4 +quarterdatespanp :: Day -> TextParser m DateSpan +quarterdatespanp rdate = do + y <- yearp <|> pure (first3 $ toGregorian rdate) + q <- char' 'q' *> satisfy is4Digit + return . periodAsDateSpan $ QuarterPeriod y (digitToInt q) + where + is4Digit c = (fromIntegral (ord c - ord '1') :: Word) <= 3 fromdatespanp :: Day -> TextParser m DateSpan -fromdatespanp rdate = do - b <- choice [ - do - string' "from" >> skipMany spacenonewline - smartdate - , - do - d <- smartdate - choice [string "..", string' "-"] - return d +fromdatespanp rdate = fromSpan <$> choice + [ string' "from" *> skipNonNewlineSpaces *> smartdate + , smartdate <* choice [string "..", string "-"] ] - return $ DateSpan (Just $ fixSmartDate rdate b) Nothing + where + fromSpan b = DateSpan (Just $ fixSmartDate rdate b) Nothing todatespanp :: Day -> TextParser m DateSpan -todatespanp rdate = do - choice [string' "to", string' "until", string "..", string' "-"] >> skipMany spacenonewline - DateSpan Nothing . Just . fixSmartDate rdate <$> smartdate +todatespanp rdate = + choice [string' "to", string' "until", string "..", string "-"] + *> skipNonNewlineSpaces + *> (DateSpan Nothing . Just . fixSmartDate rdate <$> smartdate) justdatespanp :: Day -> TextParser m DateSpan -justdatespanp rdate = do - optional (string' "in" >> skipMany spacenonewline) - spanFromSmartDate rdate <$> smartdate - --- | Make a datespan from two valid date strings parseable by parsedate --- (or raise an error). Eg: mkdatespan \"2011/1/1\" \"2011/12/31\". -mkdatespan :: String -> String -> DateSpan -mkdatespan b = DateSpan (Just $ parsedate b) . Just . 
parsedate +justdatespanp rdate = + optional (string' "in" *> skipNonNewlineSpaces) + *> (spanFromSmartDate rdate <$> smartdate) nulldatespan :: DateSpan nulldatespan = DateSpan Nothing Nothing diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Journal.hs haskell-hledger-lib-1.19.1/Hledger/Data/Journal.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Journal.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Journal.hs 2020-09-03 23:42:32.000000000 +0000 @@ -74,7 +74,6 @@ journalCashAccountQuery, -- * Misc canonicalStyleFrom, - matchpats, nulljournal, journalCheckBalanceAssertions, journalNumberAndTieTransactions, @@ -296,67 +295,55 @@ -- queries for standard account types --- | Get a query for accounts of a certain type (Asset, Liability..) in this journal. --- The query will match all accounts which were declared as that type by account directives, --- plus all their subaccounts which have not been declared as a different type. --- If no accounts were declared as this type, the query will instead match accounts --- with names matched by the provided case-insensitive regular expression. -journalAccountTypeQuery :: AccountType -> Regexp -> Journal -> Query -journalAccountTypeQuery atype fallbackregex j = - case M.lookup atype (jdeclaredaccounttypes j) of - Nothing -> Acct fallbackregex - Just as -> - -- XXX Query isn't able to match account type since that requires extra info from the journal. - -- So we do a hacky search by name instead. - And [ - Or $ map (Acct . accountNameToAccountRegex) as - ,Not $ Or $ map (Acct . 
accountNameToAccountRegex) differentlytypedsubs - ] - where - differentlytypedsubs = concat - [subs | (t,bs) <- M.toList (jdeclaredaccounttypes j) - , t /= atype - , let subs = [b | b <- bs, any (`isAccountNamePrefixOf` b) as] - ] - -- | A query for accounts in this journal which have been --- declared as Asset by account directives, or otherwise for --- accounts with names matched by the case-insensitive regular expression --- @^assets?(:|$)@. +-- declared as Asset (or Cash, a subtype of Asset) by account directives, +-- or otherwise for accounts with names matched by the case-insensitive +-- regular expression @^assets?(:|$)@. journalAssetAccountQuery :: Journal -> Query -journalAssetAccountQuery = journalAccountTypeQuery Asset "^assets?(:|$)" +journalAssetAccountQuery = journalAccountTypeQuery [Asset,Cash] (toRegexCI' "^assets?(:|$)") + +-- | A query for "Cash" (liquid asset) accounts in this journal, ie accounts +-- declared as Cash by account directives, or otherwise with names matched by the +-- case-insensitive regular expression @^assets?(:|$)@. and not including +-- the case-insensitive regular expression @(investment|receivable|:A/R|:fixed)@. +journalCashAccountQuery :: Journal -> Query +journalCashAccountQuery j = + case M.lookup Cash (jdeclaredaccounttypes j) of + Nothing -> And [ journalAssetAccountQuery j, Not . Acct $ toRegexCI' "(investment|receivable|:A/R|:fixed)" ] + Just _ -> journalAccountTypeQuery [Cash] notused j + where notused = error' "journalCashAccountQuery: this should not have happened!" -- PARTIAL: -- | A query for accounts in this journal which have been -- declared as Liability by account directives, or otherwise for -- accounts with names matched by the case-insensitive regular expression -- @^(debts?|liabilit(y|ies))(:|$)@. 
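The account type queries above share one fallback rule: accounts declared with one of the wanted types (via account directives) win; only when none were declared does the case-insensitive name regex apply. A simplified sketch of that rule with illustrative names (not hledger's types; the exclusion of differently-typed subaccounts is omitted for brevity):

```haskell
import Data.List (isPrefixOf)
import qualified Data.Map as Map

data AccountType = Asset | Cash | Liability deriving (Eq, Ord, Show)

-- Toy model of journalAccountTypeQuery's fallback: match declared account
-- names (and their subaccounts) for any of the wanted types, or fall back
-- to a name predicate standing in for the compiled regex.
accountsOfTypes :: [AccountType] -> (String -> Bool)
                -> Map.Map AccountType [String] -> (String -> Bool)
accountsOfTypes wanted fallback declared =
  case concatMap (\t -> Map.findWithDefault [] t declared) wanted of
    [] -> fallback
    as -> \name -> any (`covers` name) as
  where
    -- an account covers itself and its colon-separated subaccounts
    parent `covers` name = parent == name || (parent ++ ":") `isPrefixOf` name
```

Passing `[Asset,Cash]` as the wanted types, as `journalAssetAccountQuery` now does, is what lets Cash-declared accounts count as assets.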
journalLiabilityAccountQuery :: Journal -> Query -journalLiabilityAccountQuery = journalAccountTypeQuery Liability "^(debts?|liabilit(y|ies))(:|$)" +journalLiabilityAccountQuery = journalAccountTypeQuery [Liability] (toRegexCI' "^(debts?|liabilit(y|ies))(:|$)") -- | A query for accounts in this journal which have been -- declared as Equity by account directives, or otherwise for -- accounts with names matched by the case-insensitive regular expression -- @^equity(:|$)@. journalEquityAccountQuery :: Journal -> Query -journalEquityAccountQuery = journalAccountTypeQuery Equity "^equity(:|$)" +journalEquityAccountQuery = journalAccountTypeQuery [Equity] (toRegexCI' "^equity(:|$)") -- | A query for accounts in this journal which have been -- declared as Revenue by account directives, or otherwise for -- accounts with names matched by the case-insensitive regular expression -- @^(income|revenue)s?(:|$)@. journalRevenueAccountQuery :: Journal -> Query -journalRevenueAccountQuery = journalAccountTypeQuery Revenue "^(income|revenue)s?(:|$)" +journalRevenueAccountQuery = journalAccountTypeQuery [Revenue] (toRegexCI' "^(income|revenue)s?(:|$)") -- | A query for accounts in this journal which have been -- declared as Expense by account directives, or otherwise for -- accounts with names matched by the case-insensitive regular expression -- @^expenses?(:|$)@. journalExpenseAccountQuery :: Journal -> Query -journalExpenseAccountQuery = journalAccountTypeQuery Expense "^expenses?(:|$)" +journalExpenseAccountQuery = journalAccountTypeQuery [Expense] (toRegexCI' "^expenses?(:|$)") -- | A query for Asset, Liability & Equity accounts in this journal. -- Cf . 
-journalBalanceSheetAccountQuery :: Journal -> Query +journalBalanceSheetAccountQuery :: Journal -> Query journalBalanceSheetAccountQuery j = Or [journalAssetAccountQuery j ,journalLiabilityAccountQuery j ,journalEquityAccountQuery j @@ -369,12 +356,32 @@ ,journalExpenseAccountQuery j ] --- | A query for Cash (-equivalent) accounts in this journal (ie, --- accounts which appear on the cashflow statement.) This is currently --- hard-coded to be all the Asset accounts except for those with names --- containing the case-insensitive regular expression @(receivable|:A/R|:fixed)@. -journalCashAccountQuery :: Journal -> Query -journalCashAccountQuery j = And [journalAssetAccountQuery j, Not $ Acct "(receivable|:A/R|:fixed)"] +-- | Get a query for accounts of the specified types (Asset, Liability..) in this journal. +-- The query will match all accounts which were declared as one of +-- these types by account directives, plus all their subaccounts which +-- have not been declared as some other type. +-- Or if no accounts were declared with these types, the query will +-- instead match accounts with names matched by the provided +-- case-insensitive regular expression. +journalAccountTypeQuery :: [AccountType] -> Regexp -> Journal -> Query +journalAccountTypeQuery atypes fallbackregex Journal{jdeclaredaccounttypes} = + let + declaredacctsoftype :: [AccountName] = + concat $ mapMaybe (`M.lookup` jdeclaredaccounttypes) atypes + in case declaredacctsoftype of + [] -> Acct fallbackregex + as -> And [ Or acctnameRegexes, Not $ Or differentlyTypedRegexes ] + where + -- XXX Query isn't able to match account type since that requires extra info from the journal. + -- So we do a hacky search by name instead. + acctnameRegexes = map (Acct . accountNameToAccountRegex) as + differentlyTypedRegexes = map (Acct . 
accountNameToAccountRegex) differentlytypedsubs + + differentlytypedsubs = concat + [subs | (t,bs) <- M.toList jdeclaredaccounttypes + , not $ t `elem` atypes + , let subs = [b | b <- bs, any (`isAccountNamePrefixOf` b) as] + ] -- Various kinds of filtering on journals. We do it differently depending -- on the command. @@ -579,10 +586,15 @@ journalUntieTransactions :: Transaction -> Transaction journalUntieTransactions t@Transaction{tpostings=ps} = t{tpostings=map (\p -> p{ptransaction=Nothing}) ps} --- | Apply any transaction modifier rules in the journal --- (adding automated postings to transactions, eg). -journalModifyTransactions :: Journal -> Journal -journalModifyTransactions j = j{ jtxns = modifyTransactions (jtxnmodifiers j) (jtxns j) } +-- | Apply any transaction modifier rules in the journal (adding automated +-- postings to transactions, eg). Or if a modifier rule fails to parse, +-- return the error message. A reference date is provided to help interpret +-- relative dates in transaction modifier queries. +journalModifyTransactions :: Day -> Journal -> Either String Journal +journalModifyTransactions d j = + case modifyTransactions d (jtxnmodifiers j) (jtxns j) of + Right ts -> Right j{jtxns=ts} + Left err -> Left err -- | Check any balance assertions in the journal and return an error message -- if any of them fail (or if the transaction balancing they require fails). @@ -700,7 +712,7 @@ runST $ do -- We'll update a mutable array of transactions as we balance them, -- not strictly necessary but avoids a sort at the end I think. - balancedtxns <- newListArray (1, genericLength ts) ts + balancedtxns <- newListArray (1, toInteger $ length ts) ts -- Infer missing posting amounts, check transactions are balanced, -- and check balance assertions. 
This is done in two passes: @@ -969,7 +981,7 @@ journalInferCommodityStyles j = case commodityStylesFromAmounts $ - dbg8 "journalInferCommodityStyles using amounts" $ + dbg7 "journalInferCommodityStyles using amounts" $ journalStyleInfluencingAmounts j of Left e -> Left e @@ -1221,25 +1233,6 @@ -- ) -- ] --- Misc helpers - --- | Check if a set of hledger account/description filter patterns matches the --- given account name or entry description. Patterns are case-insensitive --- regular expressions. Prefixed with not:, they become anti-patterns. -matchpats :: [String] -> String -> Bool -matchpats pats str = - (null positives || any match positives) && (null negatives || not (any match negatives)) - where - (negatives,positives) = partition isnegativepat pats - match "" = True - match pat = regexMatchesCI (abspat pat) str - -negateprefix = "not:" - -isnegativepat = (negateprefix `isPrefixOf`) - -abspat pat = if isnegativepat pat then drop (length negateprefix) pat else pat - -- debug helpers -- traceAmountPrecision a = trace (show $ map (precision . acommodity) $ amounts a) a -- tracePostingsCommodities ps = trace (show $ map ((map (precision . acommodity) . amounts) . 
pamount) ps) ps @@ -1279,7 +1272,7 @@ txnTieKnot $ Transaction { tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/01/01", + tdate=fromGregorian 2008 01 01, tdate2=Nothing, tstatus=Unmarked, tcode="", @@ -1296,7 +1289,7 @@ txnTieKnot $ Transaction { tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/06/01", + tdate=fromGregorian 2008 06 01, tdate2=Nothing, tstatus=Unmarked, tcode="", @@ -1313,7 +1306,7 @@ txnTieKnot $ Transaction { tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/06/02", + tdate=fromGregorian 2008 06 02, tdate2=Nothing, tstatus=Unmarked, tcode="", @@ -1330,7 +1323,7 @@ txnTieKnot $ Transaction { tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/06/03", + tdate=fromGregorian 2008 06 03, tdate2=Nothing, tstatus=Cleared, tcode="", @@ -1347,7 +1340,7 @@ txnTieKnot $ Transaction { tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/10/01", + tdate=fromGregorian 2008 10 01, tdate2=Nothing, tstatus=Unmarked, tcode="", @@ -1363,7 +1356,7 @@ txnTieKnot $ Transaction { tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/12/31", + tdate=fromGregorian 2008 12 31, tdate2=Nothing, tstatus=Unmarked, tcode="", @@ -1382,11 +1375,11 @@ test "journalDateSpan" $ journalDateSpan True nulljournal{ - jtxns = [nulltransaction{tdate = parsedate "2014/02/01" - ,tpostings = [posting{pdate=Just (parsedate "2014/01/10")}] + jtxns = [nulltransaction{tdate = fromGregorian 2014 02 01 + ,tpostings = [posting{pdate=Just (fromGregorian 2014 01 10)}] } - ,nulltransaction{tdate = parsedate "2014/09/01" - ,tpostings = [posting{pdate2=Just (parsedate "2014/10/10")}] + ,nulltransaction{tdate = fromGregorian 2014 09 01 + ,tpostings = [posting{pdate2=Just (fromGregorian 2014 10 10)}] } ] } @@ -1399,11 +1392,18 @@ journalAccountNamesMatching q = filter (q `matchesAccount`) . 
journalAccountNames namesfrom qfunc = journalAccountNamesMatching (qfunc j) j in [ - test "assets" $ assertEqual "" (namesfrom journalAssetAccountQuery) ["assets","assets:bank","assets:bank:checking","assets:bank:saving","assets:cash"] - ,test "liabilities" $ assertEqual "" (namesfrom journalLiabilityAccountQuery) ["liabilities","liabilities:debts"] - ,test "equity" $ assertEqual "" (namesfrom journalEquityAccountQuery) [] - ,test "income" $ assertEqual "" (namesfrom journalRevenueAccountQuery) ["income","income:gifts","income:salary"] - ,test "expenses" $ assertEqual "" (namesfrom journalExpenseAccountQuery) ["expenses","expenses:food","expenses:supplies"] + test "assets" $ assertEqual "" ["assets","assets:bank","assets:bank:checking","assets:bank:saving","assets:cash"] + (namesfrom journalAssetAccountQuery) + ,test "cash" $ assertEqual "" ["assets","assets:bank","assets:bank:checking","assets:bank:saving","assets:cash"] + (namesfrom journalCashAccountQuery) + ,test "liabilities" $ assertEqual "" ["liabilities","liabilities:debts"] + (namesfrom journalLiabilityAccountQuery) + ,test "equity" $ assertEqual "" [] + (namesfrom journalEquityAccountQuery) + ,test "income" $ assertEqual "" ["income","income:gifts","income:salary"] + (namesfrom journalRevenueAccountQuery) + ,test "expenses" $ assertEqual "" ["expenses","expenses:food","expenses:supplies"] + (namesfrom journalExpenseAccountQuery) ] ,tests "journalBalanceTransactions" [ @@ -1413,7 +1413,7 @@ --2019/01/01 -- (a) = 1 nulljournal{ jtxns = [ - transaction "2019/01/01" [ vpost' "a" missingamt (balassert (num 1)) ] + transaction (fromGregorian 2019 01 01) [ vpost' "a" missingamt (balassert (num 1)) ] ]} assertRight ej let Right j = ej @@ -1426,8 +1426,8 @@ --2019/01/01 -- (a) 1 = 2 nulljournal{ jtxns = [ - transaction "2019/01/01" [ vpost' "a" missingamt (balassert (num 1)) ] - ,transaction "2019/01/01" [ vpost' "a" (num 1) (balassert (num 2)) ] + transaction (fromGregorian 2019 01 01) [ vpost' "a" missingamt 
(balassert (num 1)) ] + ,transaction (fromGregorian 2019 01 01) [ vpost' "a" (num 1) (balassert (num 2)) ] ]} ,test "same-day-2" $ do @@ -1440,12 +1440,12 @@ --2019/01/01 -- a 0 = 1 nulljournal{ jtxns = [ - transaction "2019/01/01" [ vpost' "a" (num 2) (balassert (num 2)) ] - ,transaction "2019/01/01" [ + transaction (fromGregorian 2019 01 01) [ vpost' "a" (num 2) (balassert (num 2)) ] + ,transaction (fromGregorian 2019 01 01) [ post' "b" (num 1) Nothing ,post' "a" missingamt Nothing ] - ,transaction "2019/01/01" [ post' "a" (num 0) (balassert (num 1)) ] + ,transaction (fromGregorian 2019 01 01) [ post' "a" (num 0) (balassert (num 1)) ] ]} ,test "out-of-order" $ do @@ -1455,8 +1455,8 @@ --2019/1/1 -- (a) 1 = 1 nulljournal{ jtxns = [ - transaction "2019/01/02" [ vpost' "a" (num 1) (balassert (num 2)) ] - ,transaction "2019/01/01" [ vpost' "a" (num 1) (balassert (num 1)) ] + transaction (fromGregorian 2019 01 02) [ vpost' "a" (num 1) (balassert (num 2)) ] + ,transaction (fromGregorian 2019 01 01) [ vpost' "a" (num 1) (balassert (num 1)) ] ]} ] @@ -1472,26 +1472,26 @@ -- test "1091a" $ do commodityStylesFromAmounts [ - nullamt{aquantity=1000, astyle=AmountStyle L False 3 (Just ',') Nothing} - ,nullamt{aquantity=1000, astyle=AmountStyle L False 2 (Just '.') (Just (DigitGroups ',' [3]))} + nullamt{aquantity=1000, astyle=AmountStyle L False (Precision 3) (Just ',') Nothing} + ,nullamt{aquantity=1000, astyle=AmountStyle L False (Precision 2) (Just '.') (Just (DigitGroups ',' [3]))} ] @?= -- The commodity style should have period as decimal mark -- and comma as digit group mark. 
Right (M.fromList [ - ("", AmountStyle L False 3 (Just '.') (Just (DigitGroups ',' [3]))) + ("", AmountStyle L False (Precision 3) (Just '.') (Just (DigitGroups ',' [3]))) ]) -- same journal, entries in reverse order ,test "1091b" $ do commodityStylesFromAmounts [ - nullamt{aquantity=1000, astyle=AmountStyle L False 2 (Just '.') (Just (DigitGroups ',' [3]))} - ,nullamt{aquantity=1000, astyle=AmountStyle L False 3 (Just ',') Nothing} + nullamt{aquantity=1000, astyle=AmountStyle L False (Precision 2) (Just '.') (Just (DigitGroups ',' [3]))} + ,nullamt{aquantity=1000, astyle=AmountStyle L False (Precision 3) (Just ',') Nothing} ] @?= -- The commodity style should have period as decimal mark -- and comma as digit group mark. Right (M.fromList [ - ("", AmountStyle L False 3 (Just '.') (Just (DigitGroups ',' [3]))) + ("", AmountStyle L False (Precision 3) (Just '.') (Just (DigitGroups ',' [3]))) ]) ] diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Json.hs haskell-hledger-lib-1.19.1/Hledger/Data/Json.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Json.hs 2020-06-06 22:33:04.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Json.hs 2020-09-03 16:29:18.000000000 +0000 @@ -89,6 +89,7 @@ instance ToJSON Amount instance ToJSON AmountStyle +instance ToJSON AmountPrecision instance ToJSON Side instance ToJSON DigitGroupStyle instance ToJSON MixedAmount @@ -158,6 +159,7 @@ instance FromJSON GenericSourcePos instance FromJSON Amount instance FromJSON AmountStyle +instance FromJSON AmountPrecision instance FromJSON Side instance FromJSON DigitGroupStyle instance FromJSON MixedAmount @@ -242,6 +244,7 @@ readJsonFile :: FromJSON a => FilePath -> IO a readJsonFile f = do bl <- BL.readFile f + -- PARTIAL: let v = fromMaybe (error $ "could not decode JSON in "++show f++" to target value") (decode bl :: Maybe Value) case fromJSON v :: FromJSON a => Result a of diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Ledger.hs haskell-hledger-lib-1.19.1/Hledger/Data/Ledger.hs --- 
haskell-hledger-lib-1.18.1/Hledger/Data/Ledger.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Ledger.hs 2020-09-01 17:33:33.000000000 +0000 @@ -17,7 +17,6 @@ ,ledgerRootAccount ,ledgerTopAccounts ,ledgerLeafAccounts - ,ledgerAccountsMatching ,ledgerPostings ,ledgerDateSpan ,ledgerCommodities @@ -26,8 +25,6 @@ where import qualified Data.Map as M --- import Data.Text (Text) -import qualified Data.Text as T import Safe (headDef) import Text.Printf @@ -90,10 +87,6 @@ ledgerLeafAccounts :: Ledger -> [Account] ledgerLeafAccounts = filter (null.asubs) . laccounts --- | Accounts in ledger whose name matches the pattern, in tree order. -ledgerAccountsMatching :: [String] -> Ledger -> [Account] -ledgerAccountsMatching pats = filter (matchpats pats . T.unpack . aname) . laccounts -- XXX pack - -- | List a ledger's postings, in the order parsed. ledgerPostings :: Ledger -> [Posting] ledgerPostings = journalPostings . ljournal diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Period.hs haskell-hledger-lib-1.19.1/Hledger/Data/Period.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Period.hs 2020-06-05 02:31:19.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Period.hs 2020-08-31 23:04:28.000000000 +0000 @@ -295,7 +295,7 @@ periodShrink today _ = YearPeriod y where (y,_,_) = toGregorian today -mondayBefore d = addDays (fromIntegral (1 - wd)) d +mondayBefore d = addDays (1 - toInteger wd) d where (_,_,wd) = toWeekDate d diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/PeriodicTransaction.hs haskell-hledger-lib-1.19.1/Hledger/Data/PeriodicTransaction.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/PeriodicTransaction.hs 2020-06-06 22:02:27.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/PeriodicTransaction.hs 2020-08-31 23:04:28.000000000 +0000 @@ -13,7 +13,7 @@ where #if !(MIN_VERSION_base(4,11,0)) -import Data.Monoid ((<>)) +import Data.Semigroup ((<>)) #endif import qualified Data.Text as T import Text.Printf @@ -38,7 
+38,7 @@ t = T.pack str (i,s) = parsePeriodExpr' nulldate t case checkPeriodicTransactionStartDate i s t of - Just e -> error' e + Just e -> error' e -- PARTIAL: Nothing -> mapM_ (putStr . showTransaction) $ runPeriodicTransaction @@ -50,7 +50,7 @@ t = T.pack str (i,s) = parsePeriodExpr' nulldate t case checkPeriodicTransactionStartDate i s t of - Just e -> error' e + Just e -> error' e -- PARTIAL: Nothing -> mapM_ (putStr . showTransaction) $ runPeriodicTransaction @@ -85,6 +85,7 @@ -- - a generated-transaction: tag -- - a hidden _generated-transaction: tag which does not appear in the comment. -- +-- >>> import Data.Time (fromGregorian) -- >>> _ptgen "monthly from 2017/1 to 2017/4" -- 2017-01-01 -- ; generated-transaction: ~ monthly from 2017/1 to 2017/4 @@ -196,28 +197,28 @@ -- ... -- -- >>> _ptgen "weekly from 2017" --- *** Exception: Unable to generate transactions according to "weekly from 2017" because 2017-01-01 is not a first day of the week +-- *** Exception: Unable to generate transactions according to "weekly from 2017" because 2017-01-01 is not a first day of the Week -- -- >>> _ptgen "monthly from 2017/5/4" --- *** Exception: Unable to generate transactions according to "monthly from 2017/5/4" because 2017-05-04 is not a first day of the month +-- *** Exception: Unable to generate transactions according to "monthly from 2017/5/4" because 2017-05-04 is not a first day of the Month -- -- >>> _ptgen "every quarter from 2017/1/2" --- *** Exception: Unable to generate transactions according to "every quarter from 2017/1/2" because 2017-01-02 is not a first day of the quarter +-- *** Exception: Unable to generate transactions according to "every quarter from 2017/1/2" because 2017-01-02 is not a first day of the Quarter -- -- >>> _ptgen "yearly from 2017/1/14" --- *** Exception: Unable to generate transactions according to "yearly from 2017/1/14" because 2017-01-14 is not a first day of the year +-- *** Exception: Unable to generate transactions according 
to "yearly from 2017/1/14" because 2017-01-14 is not a first day of the Year -- --- >>> let reportperiod="daily from 2018/01/03" in let (i,s) = parsePeriodExpr' nulldate reportperiod in runPeriodicTransaction (nullperiodictransaction{ptperiodexpr=reportperiod, ptspan=s, ptinterval=i, ptpostings=["a" `post` usd 1]}) (DateSpan (Just $ parsedate "2018-01-01") (Just $ parsedate "2018-01-03")) +-- >>> let reportperiod="daily from 2018/01/03" in let (i,s) = parsePeriodExpr' nulldate reportperiod in runPeriodicTransaction (nullperiodictransaction{ptperiodexpr=reportperiod, ptspan=s, ptinterval=i, ptpostings=["a" `post` usd 1]}) (DateSpan (Just $ fromGregorian 2018 01 01) (Just $ fromGregorian 2018 01 03)) -- [] -- --- >>> _ptgenspan "every 3 months from 2019-05" (mkdatespan "2020-01-01" "2020-02-01") --- --- >>> _ptgenspan "every 3 months from 2019-05" (mkdatespan "2020-02-01" "2020-03-01") +-- >>> _ptgenspan "every 3 months from 2019-05" (DateSpan (Just $ fromGregorian 2020 01 01) (Just $ fromGregorian 2020 02 01)) +-- +-- >>> _ptgenspan "every 3 months from 2019-05" (DateSpan (Just $ fromGregorian 2020 02 01) (Just $ fromGregorian 2020 03 01)) -- 2020-02-01 -- ; generated-transaction: ~ every 3 months from 2019-05 -- a $1.00 -- --- >>> _ptgenspan "every 3 days from 2018" (mkdatespan "2018-01-01" "2018-01-05") +-- >>> _ptgenspan "every 3 days from 2018" (DateSpan (Just $ fromGregorian 2018 01 01) (Just $ fromGregorian 2018 01 05)) -- 2018-01-01 -- ; generated-transaction: ~ every 3 days from 2018 -- a $1.00 @@ -226,7 +227,7 @@ -- ; generated-transaction: ~ every 3 days from 2018 -- a $1.00 -- --- >>> _ptgenspan "every 3 days from 2018" (mkdatespan "2018-01-02" "2018-01-05") +-- >>> _ptgenspan "every 3 days from 2018" (DateSpan (Just $ fromGregorian 2018 01 02) (Just $ fromGregorian 2018 01 05)) -- 2018-01-04 -- ; generated-transaction: ~ every 3 days from 2018 -- a $1.00 @@ -252,27 +253,27 @@ -- If transaction does not have start/end date, we set them to start/end of 
requested span, -- to avoid generating (infinitely) many events. alltxnspans = dbg3 "alltxnspans" $ ptinterval `splitSpan` (spanDefaultsFrom ptspan requestedspan) - + -- | Check that this date span begins at a boundary of this interval, -- or return an explanatory error message including the provided period expression -- (from which the span and interval are derived). checkPeriodicTransactionStartDate :: Interval -> DateSpan -> T.Text -> Maybe String checkPeriodicTransactionStartDate i s periodexpr = case (i, spanStart s) of - (Weeks _, Just d) -> checkStart d "week" - (Months _, Just d) -> checkStart d "month" - (Quarters _, Just d) -> checkStart d "quarter" - (Years _, Just d) -> checkStart d "year" + (Weeks _, Just d) -> checkStart d Week + (Months _, Just d) -> checkStart d Month + (Quarters _, Just d) -> checkStart d Quarter + (Years _, Just d) -> checkStart d Year _ -> Nothing where checkStart d x = - let firstDate = fixSmartDate d ("","this",x) + let firstDate = fixSmartDate d $ SmartRelative This x in if d == firstDate then Nothing else Just $ "Unable to generate transactions according to "++show (T.unpack periodexpr) - ++" because "++show d++" is not a first day of the "++x + ++" because "++show d++" is not a first day of the "++show x ---- | What is the interval of this 'PeriodicTransaction's period expression, if it can be parsed ? --periodTransactionInterval :: PeriodicTransaction -> Maybe Interval diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Posting.hs haskell-hledger-lib-1.19.1/Hledger/Data/Posting.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Posting.hs 2020-06-04 21:01:48.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Posting.hs 2020-09-01 17:33:33.000000000 +0000 @@ -68,6 +68,7 @@ ) where +import Control.Monad (foldM) import Data.Foldable (asum) import Data.List.Extra (nubSort) import qualified Data.Map as M @@ -289,17 +290,18 @@ -- | Rewrite an account name using all matching aliases from the given list, in sequence. 
-- Each alias sees the result of applying the previous aliases. -accountNameApplyAliases :: [AccountAlias] -> AccountName -> AccountName -accountNameApplyAliases aliases a = accountNameWithPostingType atype aname' - where - (aname,atype) = (accountNameWithoutPostingType a, accountNamePostingType a) - aname' = foldl - (\acct alias -> dbg6 "result" $ aliasReplace (dbg6 "alias" alias) (dbg6 "account" acct)) - aname - aliases +-- Or, return any error arising from a bad regular expression in the aliases. +accountNameApplyAliases :: [AccountAlias] -> AccountName -> Either RegexError AccountName +accountNameApplyAliases aliases a = + let (aname,atype) = (accountNameWithoutPostingType a, accountNamePostingType a) + in foldM + (\acct alias -> dbg6 "result" $ aliasReplace (dbg6 "alias" alias) (dbg6 "account" acct)) + aname + aliases + >>= Right . accountNameWithPostingType atype -- | Memoising version of accountNameApplyAliases, maybe overkill. -accountNameApplyAliasesMemo :: [AccountAlias] -> AccountName -> AccountName +accountNameApplyAliasesMemo :: [AccountAlias] -> AccountName -> Either RegexError AccountName accountNameApplyAliasesMemo aliases = memo (accountNameApplyAliases aliases) -- XXX re-test this memoisation @@ -307,11 +309,13 @@ -- aliasMatches (BasicAlias old _) a = old `isAccountNamePrefixOf` a -- aliasMatches (RegexAlias re _) a = regexMatchesCI re a -aliasReplace :: AccountAlias -> AccountName -> AccountName +aliasReplace :: AccountAlias -> AccountName -> Either RegexError AccountName aliasReplace (BasicAlias old new) a - | old `isAccountNamePrefixOf` a || old == a = new <> T.drop (T.length old) a - | otherwise = a -aliasReplace (RegexAlias re repl) a = T.pack $ regexReplaceCIMemo re repl $ T.unpack a -- XXX + | old `isAccountNamePrefixOf` a || old == a = + Right $ new <> T.drop (T.length old) a + | otherwise = Right a +aliasReplace (RegexAlias re repl) a = + fmap T.pack . 
regexReplace re repl $ T.unpack a -- XXX -- | Apply a specified valuation to this posting's amount, using the -- provided price oracle, commodity styles, reference dates, and @@ -361,7 +365,7 @@ | T.null c' = tag | otherwise = c' `commentJoin` tag where - c' = textchomp c + c' = T.stripEnd c tag = t <> ": " <> v -- | Add a tag on its own line to a comment, preserving any prior content. diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/RawOptions.hs haskell-hledger-lib-1.19.1/Hledger/Data/RawOptions.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/RawOptions.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/RawOptions.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,5 +1,3 @@ -{-# LANGUAGE DeriveDataTypeable #-} - {-| hledger's cmdargs modes parse command-line arguments to an @@ -21,22 +19,23 @@ maybestringopt, listofstringopt, intopt, + posintopt, maybeintopt, + maybeposintopt, maybecharopt ) where -import Data.Maybe -import Data.Data -import Data.Default -import Safe +import Data.Maybe (fromMaybe, isJust, mapMaybe) +import Data.Default (Default(..)) +import Safe (headMay, lastMay, readDef) import Hledger.Utils -- | The result of running cmdargs: an association list of option names to string values. newtype RawOpts = RawOpts { unRawOpts :: [(String,String)] } - deriving (Show, Data, Typeable) + deriving (Show) instance Default RawOpts where def = RawOpts [] @@ -59,6 +58,7 @@ -- for which the given predicate returns a Just value. -- Useful for exclusive choice flags like --daily|--weekly|--quarterly... -- +-- >>> import Safe (readMay) -- >>> choiceopt Just (RawOpts [("a",""), ("b",""), ("c","")]) -- Just "c" -- >>> choiceopt (const Nothing) (RawOpts [("a","")]) @@ -91,12 +91,35 @@ listofstringopt :: String -> RawOpts -> [String] listofstringopt name (RawOpts rawopts) = [v | (k,v) <- rawopts, k==name] +-- | Reads the named option's Int argument, if it is present. +-- An argument that is too small or too large will raise an error. 
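The new bounded option readers (`maybeintopt`, `maybeposintopt`, and the `maybeclippedintopt` helper in this hunk) all share one shape: read the argument into `Integer` first so an out-of-range value is detected instead of silently wrapping, then range-check before narrowing to `Int`. A standalone sketch of that shape — `clippedIntOpt` is illustrative, and returns `Nothing` where hledger's version calls `usageError`:

```haskell
import Text.Read (readMaybe)

-- Parse an option argument as an Int within [lo, hi]. Reading via
-- Integer first means "99999999999999999999" is rejected rather than
-- wrapped; the range check happens before fromInteger narrows it.
clippedIntOpt :: Int -> Int -> String -> Maybe Int
clippedIntOpt lo hi s = do
  n <- readMaybe s :: Maybe Integer
  if toInteger lo <= n && n <= toInteger hi
    then Just (fromInteger n)
    else Nothing
```

With `lo = 0` this models `maybeposintopt` (negative arguments rejected); with the full `Int` bounds it models `maybeintopt`.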
maybeintopt :: String -> RawOpts -> Maybe Int -maybeintopt name rawopts = - let ms = maybestringopt name rawopts in - case ms of Nothing -> Nothing - Just s -> Just $ readDef (usageError $ "could not parse "++name++" number: "++s) s +maybeintopt = maybeclippedintopt minBound maxBound + +-- | Reads the named option's natural-number argument, if it is present. +-- An argument that is negative or too large will raise an error. +maybeposintopt :: String -> RawOpts -> Maybe Int +maybeposintopt = maybeclippedintopt 0 maxBound +-- | Reads the named option's Int argument. If not present it will +-- return 0. An argument that is too small or too large will raise an error. intopt :: String -> RawOpts -> Int intopt name = fromMaybe 0 . maybeintopt name +-- | Reads the named option's natural-number argument. If not present it will +-- return 0. An argument that is negative or too large will raise an error. +posintopt :: String -> RawOpts -> Int +posintopt name = fromMaybe 0 . maybeposintopt name + +-- | Reads the named option's Int argument, if it is present. An argument +-- that does not fit within the given bounds will raise an error. +maybeclippedintopt :: Int -> Int -> String -> RawOpts -> Maybe Int +maybeclippedintopt minVal maxVal name = + fmap (intOrError . readOrError) . 
maybestringopt name + where + readOrError s = readDef (usageError $ "could not parse " ++ name ++ " number: " ++ s) s + intOrError n | n >= toInteger minVal && n <= toInteger maxVal = fromInteger n + | otherwise = usageError $ "argument to " ++ name + ++ " must lie in the range " + ++ show minVal ++ " to " ++ show maxVal + ++ ", but is " ++ show n diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Timeclock.hs haskell-hledger-lib-1.19.1/Hledger/Data/Timeclock.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Timeclock.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Timeclock.hs 2020-08-31 23:04:28.000000000 +0000 @@ -6,7 +6,7 @@ -} -{-# LANGUAGE CPP, OverloadedStrings #-} +{-# LANGUAGE OverloadedStrings #-} module Hledger.Data.Timeclock ( timeclockEntriesToTransactions @@ -21,9 +21,6 @@ import Data.Time.Clock import Data.Time.Format import Data.Time.LocalTime -#if !(MIN_VERSION_time(1,5,0)) -import System.Locale (defaultTimeLocale) -#endif import Text.Printf import Hledger.Utils @@ -94,7 +91,7 @@ entryFromTimeclockInOut i o | otime >= itime = t | otherwise = - error' $ "clock-out time less than clock-in time in:\n" ++ showTransaction t + error' $ "clock-out time less than clock-in time in:\n" ++ showTransaction t -- PARTIAL: where t = Transaction { tindex = 0, @@ -136,11 +133,7 @@ yesterday = prevday today clockin = TimeclockEntry nullsourcepos In mktime d = LocalTime d . fromMaybe midnight . -#if MIN_VERSION_time(1,5,0) parseTimeM True defaultTimeLocale "%H:%M:%S" -#else - parseTime defaultTimeLocale "%H:%M:%S" -#endif showtime = formatTime defaultTimeLocale "%H:%M" txndescs = map (T.unpack . tdescription) . 
timeclockEntriesToTransactions now future = utcToLocalTime tz $ addUTCTime 100 now' diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Transaction.hs haskell-hledger-lib-1.19.1/Hledger/Data/Transaction.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Transaction.hs 2020-06-14 16:01:00.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Transaction.hs 2020-08-31 23:04:28.000000000 +0000 @@ -105,8 +105,8 @@ } -- | Make a simple transaction with the given date and postings. -transaction :: String -> [Posting] -> Transaction -transaction datestr ps = txnTieKnot $ nulltransaction{tdate=parsedate datestr, tpostings=ps} +transaction :: Day -> [Posting] -> Transaction +transaction day ps = txnTieKnot $ nulltransaction{tdate=day, tpostings=ps} transactionPayee :: Transaction -> Text transactionPayee = fst . payeeAndNoteFromDescription . tdescription @@ -121,7 +121,7 @@ payeeAndNoteFromDescription :: Text -> (Text,Text) payeeAndNoteFromDescription t | T.null n = (t, t) - | otherwise = (textstrip p, textstrip $ T.drop 1 n) + | otherwise = (T.strip p, T.strip $ T.drop 1 n) where (p, n) = T.span (/= '|') t @@ -302,11 +302,10 @@ -- appropriately bracketed/parenthesised for the given posting type. showAccountName :: Maybe Int -> PostingType -> AccountName -> String showAccountName w = fmt - where - fmt RegularPosting = take w' . T.unpack - fmt VirtualPosting = parenthesise . reverse . take (w'-2) . reverse . T.unpack - fmt BalancedVirtualPosting = bracket . reverse . take (w'-2) . reverse . T.unpack - w' = fromMaybe 999999 w + where + fmt RegularPosting = maybe id take w . T.unpack + fmt VirtualPosting = parenthesise . maybe id (takeEnd . subtract 2) w . T.unpack + fmt BalancedVirtualPosting = bracket . maybe id (takeEnd . subtract 2) w . 
T.unpack parenthesise :: String -> String parenthesise s = "("++s++")" @@ -541,15 +540,20 @@ where fromcommodity = head $ filter (`elem` sumcommodities) pcommodities -- these heads are ugly but should be safe conversionprice - | fromcount==1 = TotalPrice $ abs toamount `withPrecision` maxprecision + | fromcount==1 = TotalPrice $ abs toamount `withPrecision` NaturalPrecision | otherwise = UnitPrice $ abs unitprice `withPrecision` unitprecision where fromcount = length $ filter ((==fromcommodity).acommodity) pamounts fromamount = head $ filter ((==fromcommodity).acommodity) sumamounts + fromprecision = asprecision $ astyle fromamount tocommodity = head $ filter (/=fromcommodity) sumcommodities toamount = head $ filter ((==tocommodity).acommodity) sumamounts + toprecision = asprecision $ astyle toamount unitprice = (aquantity fromamount) `divideAmount` toamount - unitprecision = max 2 (asprecision (astyle toamount) + asprecision (astyle fromamount)) + -- Sum two display precisions, capping the result at the maximum bound + unitprecision = case (fromprecision, toprecision) of + (Precision a, Precision b) -> Precision $ if maxBound - a < b then maxBound else max 2 (a + b) + _ -> NaturalPrecision inferprice p = p -- Get a transaction's secondary date, defaulting to the primary date. 
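The `unitprecision` change above sums two `Word8` display precisions while capping at `maxBound`. Since `Word8` addition wraps silently on overflow, the overflow test has to be phrased as a subtraction before adding, exactly as in the diff. In isolation:

```haskell
import Data.Word (Word8)

-- Sum two display precisions, saturating at maxBound (255) instead of
-- wrapping, and giving inferred unit prices at least 2 decimal places.
addPrecisions :: Word8 -> Word8 -> Word8
addPrecisions a b
  | maxBound - a < b = maxBound      -- a + b would wrap around 255
  | otherwise        = max 2 (a + b)
```

For example, 200 + 100 saturates to 255 rather than wrapping to 44.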
@@ -670,8 +674,8 @@ test "null transaction" $ showTransaction nulltransaction @?= "0000-01-01\n\n" , test "non-null transaction" $ showTransaction nulltransaction - { tdate = parsedate "2012/05/14" - , tdate2 = Just $ parsedate "2012/05/15" + { tdate = fromGregorian 2012 05 14 + , tdate2 = Just $ fromGregorian 2012 05 15 , tstatus = Unmarked , tcode = "code" , tdescription = "desc" @@ -703,7 +707,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -727,7 +731,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -750,7 +754,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -766,14 +770,14 @@ 0 "" nullsourcepos - (parsedate "2010/01/01") + (fromGregorian 2010 01 01) Nothing Unmarked "" "x" "" [] - [ posting {paccount = "a", pamount = Mixed [num 1 `at` (usd 2 `withPrecision` 0)]} + [ posting {paccount = "a", pamount = Mixed [num 1 `at` (usd 2 `withPrecision` Precision 0)]} , posting {paccount = "b", pamount = missingmixedamt} ])) @?= (unlines ["2010-01-01 x", " a 1 @ $2", " b", ""]) @@ -787,7 +791,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -803,7 +807,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -821,7 +825,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -838,7 +842,7 @@ 0 "" nullsourcepos - (parsedate "2007/01/28") + (fromGregorian 2007 01 28) Nothing Unmarked "" @@ -848,7 +852,7 @@ [ posting {paccount = "a", pamount = Mixed [usd 1.35]} , posting {paccount = "b", pamount = Mixed [eur (-1)]} ])) @?= - Right (Mixed [usd 1.35 @@ (eur 1 `withPrecision` maxprecision)]) + Right (Mixed [usd 1.35 @@ (eur 1 `withPrecision` NaturalPrecision)]) ,test "balanceTransaction balances based on cost if there are unit prices" $ assertRight $ balanceTransaction @@ 
-857,7 +861,7 @@ 0 "" nullsourcepos - (parsedate "2011/01/01") + (fromGregorian 2011 01 01) Nothing Unmarked "" @@ -875,7 +879,7 @@ 0 "" nullsourcepos - (parsedate "2011/01/01") + (fromGregorian 2011 01 01) Nothing Unmarked "" @@ -894,7 +898,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" @@ -912,7 +916,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" @@ -930,7 +934,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" @@ -945,7 +949,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" @@ -960,7 +964,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" @@ -979,7 +983,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" @@ -997,7 +1001,7 @@ 0 "" nullsourcepos - (parsedate "2009/01/01") + (fromGregorian 2009 01 01) Nothing Unmarked "" diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/TransactionModifier.hs haskell-hledger-lib-1.19.1/Hledger/Data/TransactionModifier.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/TransactionModifier.hs 2020-06-06 22:02:27.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/TransactionModifier.hs 2020-08-29 21:29:10.000000000 +0000 @@ -1,3 +1,4 @@ +{-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE ViewPatterns #-} {-# LANGUAGE CPP #-} @@ -25,7 +26,6 @@ import Hledger.Data.Transaction import Hledger.Query import Hledger.Data.Posting (commentJoin, commentAddTag) -import Hledger.Utils.UTF8IOCompat (error') import Hledger.Utils.Debug -- $setup @@ -35,64 +35,57 @@ -- >>> import Hledger.Data.Journal -- | Apply all the given transaction modifiers, in turn, to each transaction. 
-modifyTransactions :: [TransactionModifier] -> [Transaction] -> [Transaction] -modifyTransactions tmods = map applymods - where - applymods t = taggedt' +-- Or if any of them fails to be parsed, return the first error. A reference +-- date is provided to help interpret relative dates in transaction modifier +-- queries. +modifyTransactions :: Day -> [TransactionModifier] -> [Transaction] -> Either String [Transaction] +modifyTransactions d tmods ts = do + fs <- mapM (transactionModifierToFunction d) tmods -- convert modifiers to functions, or return a parse error + let + modifytxn t = t'' where - t' = foldr (flip (.) . transactionModifierToFunction) id tmods t - taggedt' - -- PERF: compares txns to see if any modifier had an effect, inefficient ? - | t' /= t = t'{tcomment = tcomment t' `commentAddTag` ("modified","") - ,ttags = ("modified","") : ttags t' - } - | otherwise = t' + t' = foldr (flip (.)) id fs t -- apply each function in turn + t'' = if t' == t -- and add some tags if it was changed + then t' + else t'{tcomment=tcomment t' `commentAddTag` ("modified",""), ttags=("modified","") : ttags t'} + Right $ map modifytxn ts --- | Converts a 'TransactionModifier' to a 'Transaction'-transforming function, +-- | Converts a 'TransactionModifier' to a 'Transaction'-transforming function -- which applies the modification(s) specified by the TransactionModifier. --- Currently this means adding automated postings when certain other postings are present. +-- Or, returns the error message there is a problem parsing the TransactionModifier's query. +-- A reference date is provided to help interpret relative dates in the query. +-- -- The postings of the transformed transaction will reference it in the usual -- way (ie, 'txnTieKnot' is called). 
-- --- >>> putStr $ showTransaction $ transactionModifierToFunction (TransactionModifier "" ["pong" `post` usd 2]) nulltransaction{tpostings=["ping" `post` usd 1]} +-- Currently the only kind of modification possible is adding automated +-- postings when certain other postings are present. +-- +-- >>> t = nulltransaction{tpostings=["ping" `post` usd 1]} +-- >>> test = either putStr (putStr.showTransaction) . fmap ($ t) . transactionModifierToFunction nulldate +-- >>> test $ TransactionModifier "" ["pong" `post` usd 2] -- 0000-01-01 -- ping $1.00 -- pong $2.00 ; generated-posting: = -- --- >>> putStr $ showTransaction $ transactionModifierToFunction (TransactionModifier "miss" ["pong" `post` usd 2]) nulltransaction{tpostings=["ping" `post` usd 1]} +-- >>> test $ TransactionModifier "miss" ["pong" `post` usd 2] -- 0000-01-01 -- ping $1.00 -- --- >>> putStr $ showTransaction $ transactionModifierToFunction (TransactionModifier "ping" ["pong" `post` amount{aismultiplier=True, aquantity=3}]) nulltransaction{tpostings=["ping" `post` usd 2]} +-- >>> test $ TransactionModifier "ping" ["pong" `post` amount{aismultiplier=True, aquantity=3}] -- 0000-01-01 --- ping $2.00 --- pong $6.00 ; generated-posting: = ping +-- ping $1.00 +-- pong $3.00 ; generated-posting: = ping -- -- -transactionModifierToFunction :: TransactionModifier -> (Transaction -> Transaction) -transactionModifierToFunction mt = - \t@(tpostings -> ps) -> txnTieKnot t{ tpostings=generatePostings ps } - where - q = simplifyQuery $ tmParseQuery mt (error' "a transaction modifier's query cannot depend on current date") - mods = map (tmPostingRuleToFunction (tmquerytxt mt)) $ tmpostingrules mt +transactionModifierToFunction :: Day -> TransactionModifier -> Either String (Transaction -> Transaction) +transactionModifierToFunction refdate TransactionModifier{tmquerytxt, tmpostingrules} = do + q <- simplifyQuery . 
fst <$> parseQuery refdate tmquerytxt + let + fs = map (tmPostingRuleToFunction tmquerytxt) tmpostingrules generatePostings ps = [p' | p <- ps - , p' <- if q `matchesPosting` p then p:[ m p | m <- mods] else [p]] - --- | Parse the 'Query' from a 'TransactionModifier's 'tmquerytxt', --- and return it as a function requiring the current date. --- --- >>> tmParseQuery (TransactionModifier "" []) undefined --- Any --- >>> tmParseQuery (TransactionModifier "ping" []) undefined --- Acct "ping" --- >>> tmParseQuery (TransactionModifier "date:2016" []) undefined --- Date (DateSpan 2016) --- >>> tmParseQuery (TransactionModifier "date:today" []) (read "2017-01-01") --- Date (DateSpan 2017-01-01) --- >>> tmParseQuery (TransactionModifier "date:today" []) (read "2017-01-01") --- Date (DateSpan 2017-01-01) -tmParseQuery :: TransactionModifier -> (Day -> Query) -tmParseQuery mt = fst . flip parseQuery (tmquerytxt mt) + , p' <- if q `matchesPosting` p then p:[f p | f <- fs] else [p]] + Right $ \t@(tpostings -> ps) -> txnTieKnot t{tpostings=generatePostings ps} -- | Converts a 'TransactionModifier''s posting rule to a 'Posting'-generating function, -- which will be used to make a new posting based on the old one (an "automated posting"). diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Types.hs haskell-hledger-lib-1.19.1/Hledger/Data/Types.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Types.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Types.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,5 +1,3 @@ -{-# LANGUAGE DeriveDataTypeable, StandaloneDeriving, DeriveGeneric, TypeSynonymInstances, FlexibleInstances, OverloadedStrings #-} -{-# LANGUAGE RecordWildCards #-} {-| Most data types are defined here to avoid import cycles. 
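The reworked `modifyTransactions` above has a two-phase shape: compile every modifier up front with `mapM` (so the first bad query aborts the whole run with a `Left`, instead of erroring mid-apply), then compose the resulting functions left-to-right with `foldr (flip (.)) id`. A generic sketch of that shape — `mkStep`, `applyRules`, and the rule strings are stand-ins, not hledger's API:

```haskell
-- Each rule either fails to "parse" (Left err) or yields a transform.
mkStep :: String -> Either String (Int -> Int)
mkStep "double" = Right (* 2)
mkStep "incr"   = Right (+ 1)
mkStep bad      = Left ("could not parse rule: " ++ bad)

-- Compile all rules first, then apply them in order, mirroring
-- 'fs <- mapM ...' and 'foldr (flip (.)) id fs t' in modifyTransactions.
applyRules :: [String] -> Int -> Either String Int
applyRules rules x = do
  fs <- mapM mkStep rules
  pure (foldr (flip (.)) id fs x)
```

`foldr (flip (.)) id [f, g]` reduces to `g . f`, so the first rule in the list is applied first; and because `mapM` short-circuits, a bad rule anywhere in the list means no transaction is modified at all.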
@@ -18,12 +16,18 @@ -} +-- {-# LANGUAGE DeriveAnyClass #-} -- https://hackage.haskell.org/package/deepseq-1.4.4.0/docs/Control-DeepSeq.html#v:rnf +{-# LANGUAGE DeriveGeneric #-} +{-# LANGUAGE FlexibleInstances #-} +{-# LANGUAGE OverloadedStrings #-} +{-# LANGUAGE RecordWildCards #-} +{-# LANGUAGE StandaloneDeriving #-} +{-# LANGUAGE TypeSynonymInstances #-} + module Hledger.Data.Types where import GHC.Generics (Generic) -import Control.DeepSeq (NFData) -import Data.Data import Data.Decimal import Data.Default import Data.Functor (($>)) @@ -38,25 +42,42 @@ -- import qualified Data.Text as T import Data.Time.Calendar import Data.Time.LocalTime +import Data.Word (Word8) import System.Time (ClockTime(..)) import Text.Printf import Hledger.Utils.Regex --- | A possibly incomplete date, whose missing parts will be filled from a reference date. --- A numeric year, month, and day of month, or the empty string for any of these. --- See the smartdate parser. -type SmartDate = (String,String,String) +-- | A possibly incomplete year-month-day date provided by the user, to be +-- interpreted as either a date or a date span depending on context. Missing +-- parts "on the left" will be filled from the provided reference date, e.g. if +-- the year and month are missing, the reference date's year and month are used. +-- Missing parts "on the right" are assumed, when interpreting as a date, to be +-- 1, (e.g. if the year and month are present but the day is missing, it means +-- first day of that month); or when interpreting as a date span, to be a +-- wildcard (so it would mean all days of that month). See the `smartdate` +-- parser for more examples. +-- +-- Or, one of the standard periods and an offset relative to the reference date: +-- (last|this|next) (day|week|month|quarter|year), where "this" means the period +-- containing the reference date. 
+data SmartDate + = SmartAssumeStart Year (Maybe (Month, Maybe MonthDay)) + | SmartFromReference (Maybe Month) MonthDay + | SmartMonth Month + | SmartRelative SmartSequence SmartInterval + deriving (Show) + +data SmartSequence = Last | This | Next deriving (Show) +data SmartInterval = Day | Week | Month | Quarter | Year deriving (Show) data WhichDate = PrimaryDate | SecondaryDate deriving (Eq,Show) -data DateSpan = DateSpan (Maybe Day) (Maybe Day) deriving (Eq,Ord,Data,Generic,Typeable) +data DateSpan = DateSpan (Maybe Day) (Maybe Day) deriving (Eq,Ord,Generic) instance Default DateSpan where def = DateSpan Nothing Nothing -instance NFData DateSpan - -- synonyms for various date-related scalars type Year = Integer type Month = Int -- 1-12 @@ -79,7 +100,7 @@ | PeriodFrom Day | PeriodTo Day | PeriodAll - deriving (Eq,Ord,Show,Data,Generic,Typeable) + deriving (Eq,Ord,Show,Generic) instance Default Period where def = PeriodAll @@ -90,7 +111,7 @@ -- MonthLong -- QuarterLong -- YearLong --- deriving (Eq,Ord,Show,Data,Generic,Typeable) +-- deriving (Eq,Ord,Show,Generic) -- Ways in which a period can be divided into subperiods. 
data Interval = @@ -107,12 +128,10 @@ -- WeekOfYear Int -- MonthOfYear Int -- QuarterOfYear Int - deriving (Eq,Show,Ord,Data,Generic,Typeable) + deriving (Eq,Show,Ord,Generic) instance Default Interval where def = NoInterval -instance NFData Interval - type AccountName = Text data AccountType = @@ -121,9 +140,8 @@ | Equity | Revenue | Expense - deriving (Show,Eq,Ord,Data,Generic) - -instance NFData AccountType + | Cash -- ^ a subtype of Asset - liquid assets to show in cashflow report + deriving (Show,Eq,Ord,Generic) -- not worth the trouble, letters defined in accountdirectivep for now --instance Read AccountType @@ -137,17 +155,12 @@ data AccountAlias = BasicAlias AccountName AccountName | RegexAlias Regexp Replacement - deriving (Eq, Read, Show, Ord, Data, Generic, Typeable) - -instance NFData AccountAlias + deriving (Eq, Read, Show, Ord, Generic) -data Side = L | R deriving (Eq,Show,Read,Ord,Typeable,Data,Generic) - -instance NFData Side +data Side = L | R deriving (Eq,Show,Read,Ord,Generic) -- | The basic numeric type used in amounts. type Quantity = Decimal -deriving instance Data Quantity -- The following is for hledger-web, and requires blaze-markup. -- Doing it here avoids needing a matching flag on the hledger-web package. instance ToMarkup Quantity @@ -158,20 +171,16 @@ -- commodity, as recorded in the journal entry eg with @ or @@. -- Docs call this "transaction price". The amount is always positive. data AmountPrice = UnitPrice Amount | TotalPrice Amount - deriving (Eq,Ord,Typeable,Data,Generic,Show) - -instance NFData AmountPrice + deriving (Eq,Ord,Generic,Show) -- | Display style for an amount. data AmountStyle = AmountStyle { ascommodityside :: Side, -- ^ does the symbol appear on the left or the right ? ascommodityspaced :: Bool, -- ^ space between symbol and quantity ? 
- asprecision :: !Int, -- ^ number of digits displayed after the decimal point + asprecision :: !AmountPrecision, -- ^ number of digits displayed after the decimal point asdecimalpoint :: Maybe Char, -- ^ character used as decimal point: period or comma. Nothing means "unspecified, use default" asdigitgroups :: Maybe DigitGroupStyle -- ^ style for displaying digit groups, if any -} deriving (Eq,Ord,Read,Typeable,Data,Generic) - -instance NFData AmountStyle +} deriving (Eq,Ord,Read,Generic) instance Show AmountStyle where show AmountStyle{..} = @@ -182,25 +191,23 @@ (show asdecimalpoint) (show asdigitgroups) +data AmountPrecision = Precision !Word8 | NaturalPrecision deriving (Eq,Ord,Read,Show,Generic) + -- | A style for displaying digit groups in the integer part of a -- floating point number. It consists of the character used to -- separate groups (comma or period, whichever is not used as decimal -- point), and the size of each group, starting with the one nearest -- the decimal point. The last group size is assumed to repeat. Eg, -- comma between thousands is DigitGroups ',' [3]. -data DigitGroupStyle = DigitGroups Char [Int] - deriving (Eq,Ord,Read,Show,Typeable,Data,Generic) - -instance NFData DigitGroupStyle +data DigitGroupStyle = DigitGroups Char [Word8] + deriving (Eq,Ord,Read,Show,Generic) type CommoditySymbol = Text data Commodity = Commodity { csymbol :: CommoditySymbol, cformat :: Maybe AmountStyle - } deriving (Show,Eq,Data,Generic) --,Ord,Typeable,Data,Generic) - -instance NFData Commodity + } deriving (Show,Eq,Generic) --,Ord) data Amount = Amount { acommodity :: CommoditySymbol, -- commodity symbol, or special value "AUTO" @@ -209,18 +216,12 @@ -- in a TMPostingRule. In a regular Posting, should always be false. 
astyle :: AmountStyle, aprice :: Maybe AmountPrice -- ^ the (fixed, transaction-specific) price for this amount, if any - } deriving (Eq,Ord,Typeable,Data,Generic,Show) - -instance NFData Amount + } deriving (Eq,Ord,Generic,Show) -newtype MixedAmount = Mixed [Amount] deriving (Eq,Ord,Typeable,Data,Generic,Show) - -instance NFData MixedAmount +newtype MixedAmount = Mixed [Amount] deriving (Eq,Ord,Generic,Show) data PostingType = RegularPosting | VirtualPosting | BalancedVirtualPosting - deriving (Eq,Show,Typeable,Data,Generic) - -instance NFData PostingType + deriving (Eq,Show,Generic) type TagName = Text type TagValue = Text @@ -230,9 +231,7 @@ -- | The status of a transaction or posting, recorded with a status mark -- (nothing, !, or *). What these mean is ultimately user defined. data Status = Unmarked | Pending | Cleared - deriving (Eq,Ord,Bounded,Enum,Typeable,Data,Generic) - -instance NFData Status + deriving (Eq,Ord,Bounded,Enum,Generic) instance Show Status where -- custom show.. bad idea.. don't do it.. show Unmarked = "" @@ -281,9 +280,7 @@ batotal :: Bool, -- ^ disallow additional non-asserted commodities ? bainclusive :: Bool, -- ^ include subaccounts when calculating the actual balance ? baposition :: GenericSourcePos -- ^ the assertion's file position, for error reporting - } deriving (Eq,Typeable,Data,Generic,Show) - -instance NFData BalanceAssertion + } deriving (Eq,Generic,Show) data Posting = Posting { pdate :: Maybe Day, -- ^ this posting's date, if different from the transaction's @@ -302,9 +299,7 @@ -- (eg its amount or price was inferred, or the account name was -- changed by a pivot or budget report), this references the original -- untransformed posting (which will have Nothing in this field). - } deriving (Typeable,Data,Generic) - -instance NFData Posting + } deriving (Generic) -- The equality test for postings ignores the parent transaction's -- identity, to avoid recurring ad infinitum. 
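The `asprecision` change above replaces an `Int` with magic sentinel values by an explicit sum type, so display code now branches on a constructor rather than comparing against a special number. A toy renderer showing the idea — `showAtPrecision` is illustrative only, not hledger's display function, and `Double` stands in for hledger's `Decimal`:

```haskell
import Data.Word (Word8)
import Numeric (showFFloat)

-- Precision is either an explicit digit count, or "however many
-- decimal places the number naturally has".
data AmountPrecision = Precision !Word8 | NaturalPrecision
  deriving (Eq, Ord, Show)

showAtPrecision :: AmountPrecision -> Double -> String
showAtPrecision NaturalPrecision q = show q                    -- natural places
showAtPrecision (Precision p)    q = showFFloat (Just (fromIntegral p)) q ""
```

Deriving `Ord` with `Precision` listed first also makes `NaturalPrecision` compare as the largest precision, which is the behaviour the old maximum sentinel value encoded implicitly.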
@@ -332,9 +327,7 @@ -- | The position of parse errors (eg), like parsec's SourcePos but generic. data GenericSourcePos = GenericSourcePos FilePath Int Int -- ^ file path, 1-based line number and 1-based column number. | JournalSourcePos FilePath (Int, Int) -- ^ file path, inclusive range of 1-based line numbers (first, last). - deriving (Eq, Read, Show, Ord, Data, Generic, Typeable) - -instance NFData GenericSourcePos + deriving (Eq, Read, Show, Ord, Generic) --{-# ANN Transaction "HLint: ignore" #-} -- Ambiguous type variable ‘p0’ arising from an annotation @@ -352,9 +345,7 @@ tcomment :: Text, -- ^ this transaction's comment lines, as a single non-indented multi-line string ttags :: [Tag], -- ^ tag names and values, extracted from the comment tpostings :: [Posting] -- ^ this transaction's postings - } deriving (Eq,Typeable,Data,Generic,Show) - -instance NFData Transaction + } deriving (Eq,Generic,Show) -- | A transaction modifier rule. This has a query which matches postings -- in the journal, and a list of transformations to apply to those @@ -364,9 +355,7 @@ data TransactionModifier = TransactionModifier { tmquerytxt :: Text, tmpostingrules :: [TMPostingRule] - } deriving (Eq,Typeable,Data,Generic,Show) - -instance NFData TransactionModifier + } deriving (Eq,Generic,Show) nulltransactionmodifier = TransactionModifier{ tmquerytxt = "" @@ -391,7 +380,7 @@ ptcomment :: Text, pttags :: [Tag], ptpostings :: [Posting] - } deriving (Eq,Typeable,Data,Generic) -- , Show in PeriodicTransaction.hs + } deriving (Eq,Generic) -- , Show in PeriodicTransaction.hs nullperiodictransaction = PeriodicTransaction{ ptperiodexpr = "" @@ -405,11 +394,7 @@ ,ptpostings = [] } -instance NFData PeriodicTransaction - -data TimeclockCode = SetBalance | SetRequiredHours | In | Out | FinalOut deriving (Eq,Ord,Typeable,Data,Generic) - -instance NFData TimeclockCode +data TimeclockCode = SetBalance | SetRequiredHours | In | Out | FinalOut deriving (Eq,Ord,Generic) data TimeclockEntry = 
TimeclockEntry { tlsourcepos :: GenericSourcePos, @@ -417,9 +402,7 @@ tldatetime :: LocalTime, tlaccount :: AccountName, tldescription :: Text - } deriving (Eq,Ord,Typeable,Data,Generic) - -instance NFData TimeclockEntry + } deriving (Eq,Ord,Generic) -- | A market price declaration made by the journal format's P directive. -- It declares two things: a historical exchange rate between two commodities, @@ -428,11 +411,9 @@ pddate :: Day ,pdcommodity :: CommoditySymbol ,pdamount :: Amount - } deriving (Eq,Ord,Typeable,Data,Generic,Show) + } deriving (Eq,Ord,Generic,Show) -- Show instance derived in Amount.hs (XXX why ?) -instance NFData PriceDirective - -- | A historical market price (exchange rate) from one commodity to another. -- A more concise form of a PriceDirective, without the amount display info. data MarketPrice = MarketPrice { @@ -440,11 +421,9 @@ ,mpfrom :: CommoditySymbol -- ^ The commodity being converted from. ,mpto :: CommoditySymbol -- ^ The commodity being converted to. ,mprate :: Quantity -- ^ One unit of the "from" commodity is worth this quantity of the "to" commodity. - } deriving (Eq,Ord,Typeable,Data,Generic) + } deriving (Eq,Ord,Generic) -- Show instance derived in Amount.hs (XXX why ?) -instance NFData MarketPrice - -- additional valuation-related types in Valuation.hs -- | A Journal, containing transactions and various other things. @@ -481,13 +460,9 @@ -- any included journal files. The main file is first, -- followed by any included files in the order encountered. ,jlastreadtime :: ClockTime -- ^ when this journal was last read from its file(s) - } deriving (Eq, Typeable, Data, Generic) + } deriving (Eq, Generic) -deriving instance Data ClockTime -deriving instance Typeable ClockTime deriving instance Generic ClockTime -instance NFData ClockTime -instance NFData Journal -- | A journal in the process of being parsed, not yet finalised. -- The data is partial, and list fields are in reverse order. 
@@ -504,9 +479,7 @@ ,aditags :: [Tag] -- ^ tags extracted from the account comment, if any ,adideclarationorder :: Int -- ^ the order in which this account was declared, -- relative to other account declarations, during parsing (1..) -} deriving (Eq,Show,Data,Generic) - -instance NFData AccountDeclarationInfo +} deriving (Eq,Show,Generic) nullaccountdeclarationinfo = AccountDeclarationInfo { adicomment = "" @@ -527,14 +500,14 @@ ,anumpostings :: Int -- ^ the number of postings to this account ,aebalance :: MixedAmount -- ^ this account's balance, excluding subaccounts ,aibalance :: MixedAmount -- ^ this account's balance, including subaccounts - } deriving (Typeable, Data, Generic) + } deriving (Generic) -- | Whether an account's balance is normally a positive number (in -- accounting terms, a debit balance) or a negative number (credit balance). -- Assets and expenses are normally positive (debit), while liabilities, equity -- and income are normally negative (credit). -- https://en.wikipedia.org/wiki/Normal_balance -data NormalSign = NormallyPositive | NormallyNegative deriving (Show, Data, Eq) +data NormalSign = NormallyPositive | NormallyNegative deriving (Show, Eq) -- | A Ledger has the journal it derives from, and the accounts -- derived from that. Accounts are accessible both list-wise and diff -Nru haskell-hledger-lib-1.18.1/Hledger/Data/Valuation.hs haskell-hledger-lib-1.19.1/Hledger/Data/Valuation.hs --- haskell-hledger-lib-1.18.1/Hledger/Data/Valuation.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Data/Valuation.hs 2020-09-01 17:33:33.000000000 +0000 @@ -9,7 +9,7 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} -{-# LANGUAGE DeriveDataTypeable, DeriveGeneric #-} +{-# LANGUAGE DeriveGeneric #-} module Hledger.Data.Valuation ( ValuationType(..) 
@@ -28,8 +28,6 @@ where import Control.Applicative ((<|>)) -import Control.DeepSeq (NFData) -import Data.Data import Data.Decimal (roundTo) import Data.Function ((&), on) import Data.Graph.Inductive (Gr, Node, NodeMap, mkMapGraph, mkNode, lab, out, sp) @@ -38,7 +36,7 @@ import qualified Data.Map as M import Data.Maybe import qualified Data.Text as T -import Data.Time.Calendar (Day) +import Data.Time.Calendar (Day, fromGregorian) import Data.MemoUgly (memo) import GHC.Generics (Generic) import Safe (headMay) @@ -46,7 +44,6 @@ import Hledger.Utils import Hledger.Data.Types import Hledger.Data.Amount -import Hledger.Data.Dates (parsedate) ------------------------------------------------------------------------------ @@ -61,7 +58,7 @@ | AtNow (Maybe CommoditySymbol) -- ^ convert to default or given valuation commodity, using current market prices | AtDate Day (Maybe CommoditySymbol) -- ^ convert to default or given valuation commodity, using market prices on some date | AtDefault (Maybe CommoditySymbol) -- ^ works like AtNow in single period reports, like AtEnd in multiperiod reports - deriving (Show,Data,Eq) -- Typeable + deriving (Show,Eq) -- | A snapshot of the known exchange rates between commodity pairs at a given date, -- as a graph allowing fast lookup and path finding, along with some helper data. 
@@ -88,8 +85,6 @@
   }
   deriving (Show,Generic)
 
-instance NFData PriceGraph
-
 -- | A price oracle is a magic memoising function that efficiently
 -- looks up market prices (exchange rates) from one commodity to
 -- another (or if unspecified, to a default valuation commodity) on a
@@ -167,7 +162,7 @@
     case v of
       AtCost    Nothing            -> styleAmount styles $ amountCost a
       AtCost    mc                 -> amountValueAtDate priceoracle styles mc periodlast $ styleAmount styles $ amountCost a
-      AtThen    _mc                -> error' unsupportedValueThenError  -- TODO
+      AtThen    _mc                -> error' unsupportedValueThenError  -- PARTIAL:
                                      -- amountValueAtDate priceoracle styles mc periodlast a  -- posting date unknown, handle like AtEnd
       AtEnd     mc                 -> amountValueAtDate priceoracle styles mc periodlast a
       AtNow     mc                 -> amountValueAtDate priceoracle styles mc today a
@@ -258,6 +253,8 @@
     Nothing -> Nothing
     Just nodes ->
       dbg ("market price for "++intercalate " -> " (map T.unpack comms)) $
+      -- TODO: it would be nice to include price date as part of the label
+      -- in PriceGraph, so we could show the dates of market prices here
       Just $ product $ pathEdgeLabels g nodes  -- convert to a single exchange rate
   where
     comms = catMaybes $ map (lab g) nodes
@@ -266,21 +263,20 @@
 
 tests_priceLookup =
   let
-    d = parsedate
-    p date from q to = MarketPrice{mpdate=d date, mpfrom=from, mpto=to, mprate=q}
+    p y m d from q to = MarketPrice{mpdate=fromGregorian y m d, mpfrom=from, mpto=to, mprate=q}
     ps1 = [
-      p "2000/01/01" "A" 10 "B"
-     ,p "2000/01/01" "B" 10 "C"
-     ,p "2000/01/01" "C" 10 "D"
-     ,p "2000/01/01" "E" 2 "D"
-     ,p "2001/01/01" "A" 11 "B"
+      p 2000 01 01 "A" 10 "B"
+     ,p 2000 01 01 "B" 10 "C"
+     ,p 2000 01 01 "C" 10 "D"
+     ,p 2000 01 01 "E" 2 "D"
+     ,p 2001 01 01 "A" 11 "B"
      ]
     makepricegraph = makePriceGraph ps1 []
   in test "priceLookup" $ do
-    priceLookup makepricegraph (d "1999/01/01") "A" Nothing @?= Nothing
-    priceLookup makepricegraph (d "2000/01/01") "A" Nothing @?= Just ("B",10)
-    priceLookup makepricegraph (d "2000/01/01") "B" (Just "A") @?= Just ("A",0.1)
-    priceLookup makepricegraph (d "2000/01/01") "A" (Just "E") @?= Just ("E",500)
+    priceLookup makepricegraph (fromGregorian 1999 01 01) "A" Nothing @?= Nothing
+    priceLookup makepricegraph (fromGregorian 2000 01 01) "A" Nothing @?= Just ("B",10)
+    priceLookup makepricegraph (fromGregorian 2000 01 01) "B" (Just "A") @?= Just ("A",0.1)
+    priceLookup makepricegraph (fromGregorian 2000 01 01) "A" (Just "E") @?= Just ("E",500)
 
 -- | Build the graph of commodity conversion prices for a given day.
 -- Converts a list of declared market prices in parse order, and a
@@ -400,7 +396,7 @@
 -- lowest-sorting label is used.
 pathEdgeLabels :: (Show b, Ord b) => Gr a b -> [Node] -> [b]
 pathEdgeLabels g = map frommaybe . map (nodesEdgeLabel g) . pathEdges
-  where frommaybe = fromMaybe (error' "pathEdgeLabels: expected no Nothings here")
+  where frommaybe = fromMaybe (error' "pathEdgeLabels: expected no Nothings here")  -- PARTIAL:
 
 -- | Convert a path to node pairs representing the path's edges.
 pathEdges :: [Node] -> [(Node,Node)]
diff -Nru haskell-hledger-lib-1.18.1/Hledger/Query.hs haskell-hledger-lib-1.19.1/Hledger/Query.hs
--- haskell-hledger-lib-1.18.1/Hledger/Query.hs 2020-06-06 22:02:27.000000000 +0000
+++ haskell-hledger-lib-1.19.1/Hledger/Query.hs 2020-09-01 17:33:33.000000000 +0000
@@ -9,13 +9,18 @@
 -- (may hide other deprecation warnings too). https://github.com/ndmitchell/safe/issues/26
 {-# OPTIONS_GHC -Wno-warnings-deprecations #-}
-{-# LANGUAGE DeriveDataTypeable, OverloadedStrings, ViewPatterns #-}
-{-# LANGUAGE CPP #-}
+{-# LANGUAGE CPP #-}
+{-# LANGUAGE FlexibleContexts #-}
+{-# LANGUAGE OverloadedStrings #-}
+{-# LANGUAGE ViewPatterns #-}
 
 module Hledger.Query (
   -- * Query and QueryOpt
   Query(..),
   QueryOpt(..),
+  payeeTag,
+  noteTag,
+  generatedTransactionTag,
   -- * parsing
   parseQuery,
   simplifyQuery,
@@ -47,6 +52,7 @@
   matchesMixedAmount,
   matchesAmount,
   matchesCommodity,
+  matchesTags,
   matchesPriceDirective,
   words'',
   prefixes,
@@ -55,18 +61,18 @@
 )
 where
 
-import Data.Data
-import Data.Either
-import Data.List
-import Data.Maybe
+import Control.Applicative ((<|>), many, optional)
+import Data.Either (partitionEithers)
+import Data.List (partition)
+import Data.Maybe (fromMaybe, isJust, mapMaybe)
 #if !(MIN_VERSION_base(4,11,0))
 import Data.Monoid ((<>))
 #endif
 import qualified Data.Text as T
-import Data.Time.Calendar
-import Safe (readDef, maximumByDef, maximumDef, minimumDef)
-import Text.Megaparsec
-import Text.Megaparsec.Char
+import Data.Time.Calendar (Day, fromGregorian )
+import Safe (readDef, readMay, maximumByMay, maximumMay, minimumMay)
+import Text.Megaparsec (between, noneOf, sepBy)
+import Text.Megaparsec.Char (char, string)
 
 import Hledger.Utils hiding (words')
 import Hledger.Data.Types
@@ -100,39 +106,31 @@
      -- and sometimes like a query option (for controlling display)
   | Tag Regexp (Maybe Regexp)  -- ^ match if a tag's name, and optionally its value, is matched by these respective regexps
                                -- matching the regexp if provided, exists
-  deriving (Eq,Data,Typeable)
+  deriving (Eq,Show)
 
--- custom Show implementation to show strings more accurately, eg for debugging regexps
-instance Show Query where
-  show Any           = "Any"
-  show None          = "None"
-  show (Not q)       = "Not (" ++ show q ++ ")"
-  show (Or qs)       = "Or (" ++ show qs ++ ")"
-  show (And qs)      = "And (" ++ show qs ++ ")"
-  show (Code r)      = "Code " ++ show r
-  show (Desc r)      = "Desc " ++ show r
-  show (Acct r)      = "Acct " ++ show r
-  show (Date ds)     = "Date (" ++ show ds ++ ")"
-  show (Date2 ds)    = "Date2 (" ++ show ds ++ ")"
-  show (StatusQ b)   = "StatusQ " ++ show b
-  show (Real b)      = "Real " ++ show b
-  show (Amt ord qty) = "Amt " ++ show ord ++ " " ++ show qty
-  show (Sym r)       = "Sym " ++ show r
-  show (Empty b)     = "Empty " ++ show b
-  show (Depth n)     = "Depth " ++ show n
-  show (Tag s ms)    = "Tag " ++ show s ++ " (" ++ show ms ++ ")"
+-- | Construct a payee tag
+payeeTag :: Maybe String -> Either RegexError Query
+payeeTag = fmap (Tag (toRegexCI' "payee")) . maybe (pure Nothing) (fmap Just . toRegexCI)
+
+-- | Construct a note tag
+noteTag :: Maybe String -> Either RegexError Query
+noteTag = fmap (Tag (toRegexCI' "note")) . maybe (pure Nothing) (fmap Just . toRegexCI)
+
+-- | Construct a generated-transaction tag
+generatedTransactionTag :: Query
+generatedTransactionTag = Tag (toRegexCI' "generated-transaction") Nothing
 
 -- | A more expressive Ord, used for amt: queries. The Abs* variants
 -- compare with the absolute value of a number, ignoring sign.
 data OrdPlus = Lt | LtEq | Gt | GtEq | Eq | AbsLt | AbsLtEq | AbsGt | AbsGtEq | AbsEq
-  deriving (Show,Eq,Data,Typeable)
+  deriving (Show,Eq)
 
 -- | A query option changes a query's/report's behaviour and output in some way.
 data QueryOpt = QueryOptInAcctOnly AccountName  -- ^ show an account register focussed on this account
               | QueryOptInAcct AccountName      -- ^ as above but include sub-accounts in the account register
            -- | QueryOptCostBasis               -- ^ show amounts converted to cost where possible
            -- | QueryOptDate2                   -- ^ show secondary dates instead of primary dates
-  deriving (Show, Eq, Data, Typeable)
+  deriving (Show, Eq)
 
 -- parsing
 
@@ -143,8 +141,11 @@
 -- showAccountMatcher _ = Nothing
 
--- | Convert a query expression containing zero or more space-separated
--- terms to a query and zero or more query options. A query term is either:
+-- | Convert a query expression containing zero or more
+-- space-separated terms to a query and zero or more query options; or
+-- return an error message if query parsing fails.
+--
+-- A query term is either:
 --
 -- 1. a search pattern, which matches on one or more fields, eg:
 --
@@ -174,19 +175,20 @@
 -- 4. then all terms are AND'd together
 --
 -- >>> parseQuery nulldate "expenses:dining out"
--- (Or ([Acct "expenses:dining",Acct "out"]),[])
--- >>> parseQuery nulldate "\"expenses:dining out\""
--- (Acct "expenses:dining out",[])
+-- Right (Or [Acct (RegexpCI "expenses:dining"),Acct (RegexpCI "out")],[])
 --
-parseQuery :: Day -> T.Text -> (Query,[QueryOpt])
-parseQuery d s = (q, opts)
-  where
-    terms = words'' prefixes s
-    (pats, opts) = partitionEithers $ map (parseQueryTerm d) terms
-    (descpats, pats') = partition queryIsDesc pats
-    (acctpats, pats'') = partition queryIsAcct pats'
-    (statuspats, otherpats) = partition queryIsStatus pats''
-    q = simplifyQuery $ And $ [Or acctpats, Or descpats, Or statuspats] ++ otherpats
+-- >>> parseQuery nulldate "\"expenses:dining out\""
+-- Right (Acct (RegexpCI "expenses:dining out"),[])
+parseQuery :: Day -> T.Text -> Either String (Query,[QueryOpt])
+parseQuery d s = do
+  let termstrs = words'' prefixes s
+  eterms <- sequence $ map (parseQueryTerm d) termstrs
+  let (pats, opts) = partitionEithers eterms
+      (descpats, pats') = partition queryIsDesc pats
+      (acctpats, pats'') = partition queryIsAcct pats'
+      (statuspats, otherpats) = partition queryIsStatus pats''
+      q = simplifyQuery $ And $ [Or acctpats, Or descpats, Or statuspats] ++ otherpats
+  Right (q, opts)
 
 -- XXX
 -- | Quote-and-prefix-aware version of words - don't split on spaces which
@@ -196,7 +198,7 @@
 words'' prefixes = fromparse . parsewith maybeprefixedquotedphrases -- XXX
   where
     maybeprefixedquotedphrases :: SimpleTextParser [T.Text]
-    maybeprefixedquotedphrases = choice' [prefixedQuotedPattern, singleQuotedPattern, doubleQuotedPattern, pattern] `sepBy` skipSome spacenonewline
+    maybeprefixedquotedphrases = choice' [prefixedQuotedPattern, singleQuotedPattern, doubleQuotedPattern, pattern] `sepBy` skipNonNewlineSpaces1
     prefixedQuotedPattern :: SimpleTextParser T.Text
     prefixedQuotedPattern = do
       not' <- fromMaybe "" `fmap` (optional $ string "not:")
@@ -250,81 +252,92 @@
 -- query = undefined
 
 -- | Parse a single query term as either a query or a query option,
--- or raise an error if it has invalid syntax.
-parseQueryTerm :: Day -> T.Text -> Either Query QueryOpt
-parseQueryTerm _ (T.stripPrefix "inacctonly:" -> Just s) = Right $ QueryOptInAcctOnly s
-parseQueryTerm _ (T.stripPrefix "inacct:" -> Just s) = Right $ QueryOptInAcct s
+-- or return an error message if parsing fails.
+parseQueryTerm :: Day -> T.Text -> Either String (Either Query QueryOpt)
+parseQueryTerm _ (T.stripPrefix "inacctonly:" -> Just s) = Right $ Right $ QueryOptInAcctOnly s
+parseQueryTerm _ (T.stripPrefix "inacct:" -> Just s) = Right $ Right $ QueryOptInAcct s
 parseQueryTerm d (T.stripPrefix "not:" -> Just s) =
   case parseQueryTerm d s of
-    Left m  -> Left $ Not m
-    Right _ -> Left Any -- not:somequeryoption will be ignored
-parseQueryTerm _ (T.stripPrefix "code:" -> Just s) = Left $ Code $ T.unpack s
-parseQueryTerm _ (T.stripPrefix "desc:" -> Just s) = Left $ Desc $ T.unpack s
-parseQueryTerm _ (T.stripPrefix "payee:" -> Just s) = Left $ Tag "payee" $ Just $ T.unpack s
-parseQueryTerm _ (T.stripPrefix "note:" -> Just s) = Left $ Tag "note" $ Just $ T.unpack s
-parseQueryTerm _ (T.stripPrefix "acct:" -> Just s) = Left $ Acct $ T.unpack s
+    Right (Left m)  -> Right $ Left $ Not m
+    Right (Right _) -> Right $ Left Any -- not:somequeryoption will be ignored
+    Left err        -> Left err
+parseQueryTerm _ (T.stripPrefix "code:" -> Just s) = Left . Code <$> toRegexCI (T.unpack s)
+parseQueryTerm _ (T.stripPrefix "desc:" -> Just s) = Left . Desc <$> toRegexCI (T.unpack s)
+parseQueryTerm _ (T.stripPrefix "payee:" -> Just s) = Left <$> payeeTag (Just $ T.unpack s)
+parseQueryTerm _ (T.stripPrefix "note:" -> Just s) = Left <$> noteTag (Just $ T.unpack s)
+parseQueryTerm _ (T.stripPrefix "acct:" -> Just s) = Left . Acct <$> toRegexCI (T.unpack s)
 parseQueryTerm d (T.stripPrefix "date2:" -> Just s) =
-        case parsePeriodExpr d s of Left e         -> error' $ "\"date2:"++T.unpack s++"\" gave a "++showDateParseError e
-                                    Right (_,span) -> Left $ Date2 span
+        case parsePeriodExpr d s of Left e         -> Left $ "\"date2:"++T.unpack s++"\" gave a "++showDateParseError e
+                                    Right (_,span) -> Right $ Left $ Date2 span
 parseQueryTerm d (T.stripPrefix "date:" -> Just s) =
-        case parsePeriodExpr d s of Left e         -> error' $ "\"date:"++T.unpack s++"\" gave a "++showDateParseError e
-                                    Right (_,span) -> Left $ Date span
+        case parsePeriodExpr d s of Left e         -> Left $ "\"date:"++T.unpack s++"\" gave a "++showDateParseError e
+                                    Right (_,span) -> Right $ Left $ Date span
 parseQueryTerm _ (T.stripPrefix "status:" -> Just s) =
-        case parseStatus s of Left e   -> error' $ "\"status:"++T.unpack s++"\" gave a parse error: " ++ e
-                              Right st -> Left $ StatusQ st
-parseQueryTerm _ (T.stripPrefix "real:" -> Just s) = Left $ Real $ parseBool s || T.null s
-parseQueryTerm _ (T.stripPrefix "amt:" -> Just s) = Left $ Amt ord q where (ord, q) = parseAmountQueryTerm s
-parseQueryTerm _ (T.stripPrefix "empty:" -> Just s) = Left $ Empty $ parseBool s
+        case parseStatus s of Left e   -> Left $ "\"status:"++T.unpack s++"\" gave a parse error: " ++ e
+                              Right st -> Right $ Left $ StatusQ st
+parseQueryTerm _ (T.stripPrefix "real:" -> Just s) = Right $ Left $ Real $ parseBool s || T.null s
+parseQueryTerm _ (T.stripPrefix "amt:" -> Just s) = Right $ Left $ Amt ord q where (ord, q) = either error id $ parseAmountQueryTerm s -- PARTIAL:
+parseQueryTerm _ (T.stripPrefix "empty:" -> Just s) = Right $ Left $ Empty $ parseBool s
 parseQueryTerm _ (T.stripPrefix "depth:" -> Just s)
-  | n >= 0    = Left $ Depth n
-  | otherwise = error' "depth: should have a positive number"
+  | n >= 0    = Right $ Left $ Depth n
+  | otherwise = Left "depth: should have a positive number"
   where n = readDef 0 (T.unpack s)
-parseQueryTerm _ (T.stripPrefix "cur:" -> Just s) = Left $ Sym (T.unpack s) -- support cur: as an alias
-parseQueryTerm _ (T.stripPrefix "tag:" -> Just s) = Left $ Tag n v where (n,v) = parseTag s
-parseQueryTerm _ "" = Left $ Any
+parseQueryTerm _ (T.stripPrefix "cur:" -> Just s) = Left . Sym <$> toRegexCI ('^' : T.unpack s ++ "$") -- support cur: as an alias
+parseQueryTerm _ (T.stripPrefix "tag:" -> Just s) = Left <$> parseTag s
+parseQueryTerm _ "" = Right $ Left $ Any
 parseQueryTerm d s = parseQueryTerm d $ defaultprefix<>":"<>s
 
--- | Parse what comes after amt: .
-parseAmountQueryTerm :: T.Text -> (OrdPlus, Quantity)
-parseAmountQueryTerm s' =
-  case s' of
-    -- feel free to do this a smarter way
-    ""                              -> err
-    (T.stripPrefix "<+" -> Just s)  -> (Lt, readDef err (T.unpack s))
-    (T.stripPrefix "<=+" -> Just s) -> (LtEq, readDef err (T.unpack s))
-    (T.stripPrefix ">+" -> Just s)  -> (Gt, readDef err (T.unpack s))
-    (T.stripPrefix ">=+" -> Just s) -> (GtEq, readDef err (T.unpack s))
-    (T.stripPrefix "=+" -> Just s)  -> (Eq, readDef err (T.unpack s))
-    (T.stripPrefix "+" -> Just s)   -> (Eq, readDef err (T.unpack s))
-    (T.stripPrefix "<-" -> Just s)  -> (Lt, negate $ readDef err (T.unpack s))
-    (T.stripPrefix "<=-" -> Just s) -> (LtEq, negate $ readDef err (T.unpack s))
-    (T.stripPrefix ">-" -> Just s)  -> (Gt, negate $ readDef err (T.unpack s))
-    (T.stripPrefix ">=-" -> Just s) -> (GtEq, negate $ readDef err (T.unpack s))
-    (T.stripPrefix "=-" -> Just s)  -> (Eq, negate $ readDef err (T.unpack s))
-    (T.stripPrefix "-" -> Just s)   -> (Eq, negate $ readDef err (T.unpack s))
-    (T.stripPrefix "<=" -> Just s)  -> let n = readDef err (T.unpack s) in
                                       case n of
                                         0 -> (LtEq, 0)
                                         _ -> (AbsLtEq, n)
-    (T.stripPrefix "<" -> Just s)   -> let n = readDef err (T.unpack s) in
                                       case n of 0 -> (Lt, 0)
                                                 _ -> (AbsLt, n)
-    (T.stripPrefix ">=" -> Just s)  -> let n = readDef err (T.unpack s) in
                                       case n of 0 -> (GtEq, 0)
                                                 _ -> (AbsGtEq, n)
-    (T.stripPrefix ">" -> Just s)   -> let n = readDef err (T.unpack s) in
                                       case n of 0 -> (Gt, 0)
                                                 _ -> (AbsGt, n)
-    (T.stripPrefix "=" -> Just s)   -> (AbsEq, readDef err (T.unpack s))
-    s                               -> (AbsEq, readDef err (T.unpack s))
+-- | Parse the argument of an amt query term ([OP][SIGN]NUM), to an
+-- OrdPlus and a Quantity, or if parsing fails, an error message. OP
+-- can be <=, <, >=, >, or = . NUM can be a simple integer or decimal.
+-- If a decimal, the decimal mark must be period, and it must have
+-- digits preceding it. Digit group marks are not allowed.
+parseAmountQueryTerm :: T.Text -> Either String (OrdPlus, Quantity)
+parseAmountQueryTerm amtarg =
+  case amtarg of
+    -- number has a + sign, do a signed comparison
+    (parse "<=+" -> Just q) -> Right (LtEq    ,q)
+    (parse "<+"  -> Just q) -> Right (Lt      ,q)
+    (parse ">=+" -> Just q) -> Right (GtEq    ,q)
+    (parse ">+"  -> Just q) -> Right (Gt      ,q)
+    (parse "=+"  -> Just q) -> Right (Eq      ,q)
+    (parse "+"   -> Just q) -> Right (Eq      ,q)
+    -- number has a - sign, do a signed comparison
+    (parse "<-"  -> Just q) -> Right (Lt      ,-q)
+    (parse "<=-" -> Just q) -> Right (LtEq    ,-q)
+    (parse ">-"  -> Just q) -> Right (Gt      ,-q)
+    (parse ">=-" -> Just q) -> Right (GtEq    ,-q)
+    (parse "=-"  -> Just q) -> Right (Eq      ,-q)
+    (parse "-"   -> Just q) -> Right (Eq      ,-q)
+    -- number is unsigned and zero, do a signed comparison (more useful)
+    (parse "<="  -> Just 0) -> Right (LtEq    ,0)
+    (parse "<"   -> Just 0) -> Right (Lt      ,0)
+    (parse ">="  -> Just 0) -> Right (GtEq    ,0)
+    (parse ">"   -> Just 0) -> Right (Gt      ,0)
    -- number is unsigned and non-zero, do an absolute magnitude comparison
+    (parse "<="  -> Just q) -> Right (AbsLtEq ,q)
+    (parse "<"   -> Just q) -> Right (AbsLt   ,q)
+    (parse ">="  -> Just q) -> Right (AbsGtEq ,q)
+    (parse ">"   -> Just q) -> Right (AbsGt   ,q)
+    (parse "="   -> Just q) -> Right (AbsEq   ,q)
+    (parse ""    -> Just q) -> Right (AbsEq   ,q)
+    _ -> Left $
+         "could not parse as a comparison operator followed by an optionally-signed number: "
+         ++ T.unpack amtarg
  where
-    err = error' $ "could not parse as '=', '<', or '>' (optional) followed by a (optionally signed) numeric quantity: " ++ T.unpack s'
-
-parseTag :: T.Text -> (Regexp, Maybe Regexp)
-parseTag s | "=" `T.isInfixOf` s = (T.unpack n, Just $ tail $ T.unpack v)
-           | otherwise = (T.unpack s, Nothing)
-  where (n,v) = T.break (=='=') s
+    -- Strip outer whitespace from the text, require and remove the
+    -- specified prefix, remove all whitespace from the remainder, and
+    -- read it as a simple integer or decimal if possible.
+    parse :: T.Text -> T.Text -> Maybe Quantity
+    parse p s = (T.stripPrefix p . T.strip) s >>= readMay . filter (not.(==' ')) . T.unpack
+
+parseTag :: T.Text -> Either RegexError Query
+parseTag s = do
+    tag <- toRegexCI . T.unpack $ if T.null v then s else n
+    body <- if T.null v then pure Nothing else Just <$> toRegexCI (tail $ T.unpack v)
+    return $ Tag tag body
  where (n,v) = T.break (=='=') s
 
 -- | Parse the value part of a "status:" query, or return an error.
 parseStatus :: T.Text -> Either String Status
@@ -488,34 +501,33 @@
 
 -- | What is the earliest of these dates, where Nothing is earliest ?
 earliestMaybeDate :: [Maybe Day] -> Maybe Day
-earliestMaybeDate = minimumDef Nothing
+earliestMaybeDate = fromMaybe Nothing . minimumMay
 
 -- | What is the latest of these dates, where Nothing is earliest ?
 latestMaybeDate :: [Maybe Day] -> Maybe Day
-latestMaybeDate = maximumDef Nothing
+latestMaybeDate = fromMaybe Nothing . maximumMay
 
 -- | What is the earliest of these dates, where Nothing is the latest ?
 earliestMaybeDate' :: [Maybe Day] -> Maybe Day
-earliestMaybeDate' = minimumDef Nothing . filter isJust
+earliestMaybeDate' = fromMaybe Nothing . minimumMay . filter isJust
 
 -- | What is the latest of these dates, where Nothing is the latest ?
 latestMaybeDate' :: [Maybe Day] -> Maybe Day
-latestMaybeDate' = maximumByDef Nothing compareNothingMax
+latestMaybeDate' = fromMaybe Nothing . maximumByMay compareNothingMax
  where
    compareNothingMax Nothing  Nothing  = EQ
    compareNothingMax (Just _) Nothing  = LT
    compareNothingMax Nothing  (Just _) = GT
    compareNothingMax (Just a) (Just b) = compare a b
 
--- | The depth limit this query specifies, or a large number if none.
-queryDepth :: Query -> Int
-queryDepth q = case queryDepth' q of [] -> 99999
-                                     ds -> minimum ds
+-- | The depth limit this query specifies, if it has one
+queryDepth :: Query -> Maybe Int
+queryDepth = minimumMay . queryDepth'
  where
    queryDepth' (Depth d) = [d]
-    queryDepth' (Or qs) = concatMap queryDepth' qs
-    queryDepth' (And qs) = concatMap queryDepth' qs
-    queryDepth' _ = []
+    queryDepth' (Or qs)   = concatMap queryDepth' qs
+    queryDepth' (And qs)  = concatMap queryDepth' qs
+    queryDepth' _         = []
 
 -- | The account we are currently focussed on, if any, and whether subaccounts are included.
 -- Just looks at the first query option.
@@ -528,8 +540,8 @@
 -- Just looks at the first query option.
 inAccountQuery :: [QueryOpt] -> Maybe Query
 inAccountQuery [] = Nothing
-inAccountQuery (QueryOptInAcctOnly a : _) = Just $ Acct $ accountNameToAccountOnlyRegex a
-inAccountQuery (QueryOptInAcct a : _) = Just $ Acct $ accountNameToAccountRegex a
+inAccountQuery (QueryOptInAcctOnly a : _) = Just . Acct $ accountNameToAccountOnlyRegex a
+inAccountQuery (QueryOptInAcct a : _) = Just . Acct $ accountNameToAccountRegex a
 
 -- -- | Convert a query to its inverse.
 -- negateQuery :: Query -> Query
@@ -539,12 +551,14 @@
 
 -- | Does the match expression match this account ?
 -- A matching in: clause is also considered a match.
+-- When matching by account name pattern, if there's a regular
+-- expression error, this function calls error.
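The `queryDepth` change in the hunk above swaps the old magic 99999 sentinel for `Maybe Int`, built on `minimumMay` from the safe package. The pattern can be sketched self-containedly (this local `minimumMay` stands in for safe's, and `effectiveDepth` is a made-up helper name, not hledger API):

```haskell
-- Return Nothing for an empty list instead of throwing, as the
-- partial Prelude minimum would; this is the total-function style
-- the changelog's "safer code" items refer to.
minimumMay :: Ord a => [a] -> Maybe a
minimumMay [] = Nothing
minimumMay xs = Just (minimum xs)

-- hypothetical helper: the effective depth limit from collected
-- Depth terms is the smallest one, or Nothing when there are none
effectiveDepth :: [Int] -> Maybe Int
effectiveDepth = minimumMay
```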
 matchesAccount :: Query -> AccountName -> Bool
 matchesAccount (None) _ = False
 matchesAccount (Not m) a = not $ matchesAccount m a
 matchesAccount (Or ms) a = any (`matchesAccount` a) ms
 matchesAccount (And ms) a = all (`matchesAccount` a) ms
-matchesAccount (Acct r) a = regexMatchesCI r (T.unpack a) -- XXX pack
+matchesAccount (Acct r) a = regexMatch r $ T.unpack a -- XXX pack
 matchesAccount (Depth d) a = accountNameLevel a <= d
 matchesAccount (Tag _ _) _ = False
 matchesAccount _ _ = True
@@ -554,8 +568,8 @@
 matchesMixedAmount q (Mixed as) = any (q `matchesAmount`) as
 
 matchesCommodity :: Query -> CommoditySymbol -> Bool
-matchesCommodity (Sym r) s = regexMatchesCI ("^" ++ r ++ "$") (T.unpack s)
-matchesCommodity _ _ = True
+matchesCommodity (Sym r) = regexMatch r . T.unpack
+matchesCommodity _ = const True
 
 -- | Does the match expression match this (simple) amount ?
 matchesAmount :: Query -> Amount -> Bool
@@ -564,10 +578,8 @@
 matchesAmount (None) _ = False
 matchesAmount (Or qs) a = any (`matchesAmount` a) qs
 matchesAmount (And qs) a = all (`matchesAmount` a) qs
--- matchesAmount (Amt ord n) a = compareAmount ord n a
 matchesAmount (Sym r) a = matchesCommodity (Sym r) (acommodity a)
-
--- matchesAmount _ _ = True
 
 -- | Is this simple (single-amount) mixed amount's quantity less than, greater than, equal to, or unsignedly equal to this number ?
@@ -595,10 +607,10 @@
 matchesPosting (None) _ = False
 matchesPosting (Or qs) p = any (`matchesPosting` p) qs
 matchesPosting (And qs) p = all (`matchesPosting` p) qs
-matchesPosting (Code r) p = regexMatchesCI r $ maybe "" (T.unpack . tcode) $ ptransaction p
-matchesPosting (Desc r) p = regexMatchesCI r $ maybe "" (T.unpack . tdescription) $ ptransaction p
-matchesPosting (Acct r) p = matchesPosting p || matchesPosting (originalPosting p)
-    where matchesPosting p = regexMatchesCI r $ T.unpack $ paccount p -- XXX pack
+matchesPosting (Code r) p = regexMatch r $ maybe "" (T.unpack . tcode) $ ptransaction p
+matchesPosting (Desc r) p = regexMatch r $ maybe "" (T.unpack . tdescription) $ ptransaction p
+matchesPosting (Acct r) p = matches p || matches (originalPosting p)
+    where matches p = regexMatch r . T.unpack $ paccount p -- XXX pack
 matchesPosting (Date span) p = span `spanContainsDate` postingDate p
 matchesPosting (Date2 span) p = span `spanContainsDate` postingDate2 p
 matchesPosting (StatusQ s) p = postingStatus p == s
@@ -611,10 +623,10 @@
 -- matchesPosting (Empty True) Posting{pamount=a} = mixedAmountLooksZero a
 matchesPosting (Empty _) _ = True
 matchesPosting (Sym r) Posting{pamount=Mixed as} = any (matchesCommodity (Sym r)) $ map acommodity as
-matchesPosting (Tag n v) p = case (n, v) of
-  ("payee", Just v) -> maybe False (regexMatchesCI v . T.unpack . transactionPayee) $ ptransaction p
-  ("note", Just v) -> maybe False (regexMatchesCI v . T.unpack . transactionNote) $ ptransaction p
-  (n, v) -> matchesTags n v $ postingAllTags p
+matchesPosting (Tag n v) p = case (reString n, v) of
+  ("payee", Just v) -> maybe False (regexMatch v . T.unpack . transactionPayee) $ ptransaction p
+  ("note", Just v) -> maybe False (regexMatch v . T.unpack . transactionNote) $ ptransaction p
+  (_, v) -> matchesTags n v $ postingAllTags p
 
 -- | Does the match expression match this transaction ?
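The Tag-matching rewrite in these hunks merges the name check and the optional value check into one expression. Its shape can be sketched with ordinary predicates standing in for hledger's compiled Regexps (`matchesTagsLike` is a made-up name, not hledger API):

```haskell
-- A tag matches when its name matches and, if a value pattern was
-- given, its value matches too; a query matches a tag list when any
-- tag in it matches.
matchesTagsLike :: (String -> Bool) -> Maybe (String -> Bool)
                -> [(String, String)] -> Bool
matchesTagsLike namepat valuepat = any matches
  where matches (n, v) = namepat n && maybe True ($ v) valuepat
```

The `maybe True ($ v) valuepat` step is the key move: a missing value pattern degrades to an always-true check, so the two old equations collapse into one.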
 matchesTransaction :: Query -> Transaction -> Bool
@@ -623,8 +635,8 @@
 matchesTransaction (None) _ = False
 matchesTransaction (Or qs) t = any (`matchesTransaction` t) qs
 matchesTransaction (And qs) t = all (`matchesTransaction` t) qs
-matchesTransaction (Code r) t = regexMatchesCI r $ T.unpack $ tcode t
-matchesTransaction (Desc r) t = regexMatchesCI r $ T.unpack $ tdescription t
+matchesTransaction (Code r) t = regexMatch r $ T.unpack $ tcode t
+matchesTransaction (Desc r) t = regexMatch r $ T.unpack $ tdescription t
 matchesTransaction q@(Acct _) t = any (q `matchesPosting`) $ tpostings t
 matchesTransaction (Date span) t = spanContainsDate span $ tdate t
 matchesTransaction (Date2 span) t = spanContainsDate span $ transactionDate2 t
@@ -634,18 +646,16 @@
 matchesTransaction (Empty _) _ = True
 matchesTransaction (Depth d) t = any (Depth d `matchesPosting`) $ tpostings t
 matchesTransaction q@(Sym _) t = any (q `matchesPosting`) $ tpostings t
-matchesTransaction (Tag n v) t = case (n, v) of
-  ("payee", Just v) -> regexMatchesCI v . T.unpack . transactionPayee $ t
-  ("note", Just v) -> regexMatchesCI v . T.unpack . transactionNote $ t
-  (n, v) -> matchesTags n v $ transactionAllTags t
+matchesTransaction (Tag n v) t = case (reString n, v) of
+  ("payee", Just v) -> regexMatch v . T.unpack . transactionPayee $ t
+  ("note", Just v) -> regexMatch v . T.unpack . transactionNote $ t
+  (_, v) -> matchesTags n v $ transactionAllTags t
 
--- | Filter a list of tags by matching against their names and
--- optionally also their values.
+-- | Does the query match the name and optionally the value of any of these tags ?
 matchesTags :: Regexp -> Maybe Regexp -> [Tag] -> Bool
-matchesTags namepat valuepat = not . null . filter (match namepat valuepat)
+matchesTags namepat valuepat = not . null . filter (matches namepat valuepat)
  where
-    match npat Nothing (n,_) = regexMatchesCI npat (T.unpack n) -- XXX
-    match npat (Just vpat) (n,v) = regexMatchesCI npat (T.unpack n) && regexMatchesCI vpat (T.unpack v)
+    matches npat vpat (n,v) = regexMatch npat (T.unpack n) && maybe (const True) regexMatch vpat (T.unpack v)
 
 -- | Does the query match this market price ?
 matchesPriceDirective :: Query -> PriceDirective -> Bool
@@ -663,23 +673,23 @@
 tests_Query = tests "Query" [
    test "simplifyQuery" $ do
-     (simplifyQuery $ Or [Acct "a"]) @?= (Acct "a")
+     (simplifyQuery $ Or [Acct $ toRegex' "a"]) @?= (Acct $ toRegex' "a")
      (simplifyQuery $ Or [Any,None]) @?= (Any)
      (simplifyQuery $ And [Any,None]) @?= (None)
      (simplifyQuery $ And [Any,Any]) @?= (Any)
-     (simplifyQuery $ And [Acct "b",Any]) @?= (Acct "b")
+     (simplifyQuery $ And [Acct $ toRegex' "b",Any]) @?= (Acct $ toRegex' "b")
      (simplifyQuery $ And [Any,And [Date (DateSpan Nothing Nothing)]]) @?= (Any)
-     (simplifyQuery $ And [Date (DateSpan Nothing (Just $ parsedate "2013-01-01")), Date (DateSpan (Just $ parsedate "2012-01-01") Nothing)])
-       @?= (Date (DateSpan (Just $ parsedate "2012-01-01") (Just $ parsedate "2013-01-01")))
-     (simplifyQuery $ And [Or [],Or [Desc "b b"]]) @?= (Desc "b b")
+     (simplifyQuery $ And [Date (DateSpan Nothing (Just $ fromGregorian 2013 01 01)), Date (DateSpan (Just $ fromGregorian 2012 01 01) Nothing)])
+       @?= (Date (DateSpan (Just $ fromGregorian 2012 01 01) (Just $ fromGregorian 2013 01 01)))
+     (simplifyQuery $ And [Or [],Or [Desc $ toRegex' "b b"]]) @?= (Desc $ toRegex' "b b")
 
   ,test "parseQuery" $ do
-     (parseQuery nulldate "acct:'expenses:autres d\233penses' desc:b") @?= (And [Acct "expenses:autres d\233penses", Desc "b"], [])
-     parseQuery nulldate "inacct:a desc:\"b b\"" @?= (Desc "b b", [QueryOptInAcct "a"])
-     parseQuery nulldate "inacct:a inacct:b" @?= (Any, [QueryOptInAcct "a", QueryOptInAcct "b"])
-     parseQuery nulldate "desc:'x x'" @?= (Desc "x x", [])
-     parseQuery nulldate "'a a' 'b" @?= (Or [Acct "a a",Acct "'b"], [])
-     parseQuery nulldate "\"" @?= (Acct "\"", [])
+     (parseQuery nulldate "acct:'expenses:autres d\233penses' desc:b") @?= Right (And [Acct $ toRegexCI' "expenses:autres d\233penses", Desc $ toRegexCI' "b"], [])
+     parseQuery nulldate "inacct:a desc:\"b b\"" @?= Right (Desc $ toRegexCI' "b b", [QueryOptInAcct "a"])
+     parseQuery nulldate "inacct:a inacct:b" @?= Right (Any, [QueryOptInAcct "a", QueryOptInAcct "b"])
+     parseQuery nulldate "desc:'x x'" @?= Right (Desc $ toRegexCI' "x x", [])
+     parseQuery nulldate "'a a' 'b" @?= Right (Or [Acct $ toRegexCI' "a a",Acct $ toRegexCI' "'b"], [])
+     parseQuery nulldate "\"" @?= Right (Acct $ toRegexCI' "\"", [])
 
   ,test "words''" $ do
      (words'' [] "a b") @?= ["a","b"]
@@ -698,35 +708,37 @@
     filterQuery queryIsDepth (And [Date nulldatespan, Not (Or [Any, Depth 1])]) @?= Any -- XXX unclear
 
  ,test "parseQueryTerm" $ do
-     parseQueryTerm nulldate "a" @?= (Left $ Acct "a")
-     parseQueryTerm nulldate "acct:expenses:autres d\233penses" @?= (Left $ Acct "expenses:autres d\233penses")
-     parseQueryTerm nulldate "not:desc:a b" @?= (Left $ Not $ Desc "a b")
-     parseQueryTerm nulldate "status:1" @?= (Left $ StatusQ Cleared)
-     parseQueryTerm nulldate "status:*" @?= (Left $ StatusQ Cleared)
-     parseQueryTerm nulldate "status:!" @?= (Left $ StatusQ Pending)
-     parseQueryTerm nulldate "status:0" @?= (Left $ StatusQ Unmarked)
-     parseQueryTerm nulldate "status:" @?= (Left $ StatusQ Unmarked)
-     parseQueryTerm nulldate "payee:x" @?= (Left $ Tag "payee" (Just "x"))
-     parseQueryTerm nulldate "note:x" @?= (Left $ Tag "note" (Just "x"))
-     parseQueryTerm nulldate "real:1" @?= (Left $ Real True)
-     parseQueryTerm nulldate "date:2008" @?= (Left $ Date $ DateSpan (Just $ parsedate "2008/01/01") (Just $ parsedate "2009/01/01"))
-     parseQueryTerm nulldate "date:from 2012/5/17" @?= (Left $ Date $ DateSpan (Just $ parsedate "2012/05/17") Nothing)
-     parseQueryTerm nulldate "date:20180101-201804" @?= (Left $ Date $ DateSpan (Just $ parsedate "2018/01/01") (Just $ parsedate "2018/04/01"))
-     parseQueryTerm nulldate "inacct:a" @?= (Right $ QueryOptInAcct "a")
-     parseQueryTerm nulldate "tag:a" @?= (Left $ Tag "a" Nothing)
-     parseQueryTerm nulldate "tag:a=some value" @?= (Left $ Tag "a" (Just "some value"))
-     parseQueryTerm nulldate "amt:<0" @?= (Left $ Amt Lt 0)
-     parseQueryTerm nulldate "amt:>10000.10" @?= (Left $ Amt AbsGt 10000.1)
+     parseQueryTerm nulldate "a" @?= Right (Left $ Acct $ toRegexCI' "a")
+     parseQueryTerm nulldate "acct:expenses:autres d\233penses" @?= Right (Left $ Acct $ toRegexCI' "expenses:autres d\233penses")
+     parseQueryTerm nulldate "not:desc:a b" @?= Right (Left $ Not $ Desc $ toRegexCI' "a b")
+     parseQueryTerm nulldate "status:1" @?= Right (Left $ StatusQ Cleared)
+     parseQueryTerm nulldate "status:*" @?= Right (Left $ StatusQ Cleared)
+     parseQueryTerm nulldate "status:!" @?= Right (Left $ StatusQ Pending)
+     parseQueryTerm nulldate "status:0" @?= Right (Left $ StatusQ Unmarked)
+     parseQueryTerm nulldate "status:" @?= Right (Left $ StatusQ Unmarked)
+     parseQueryTerm nulldate "payee:x" @?= Left <$> payeeTag (Just "x")
+     parseQueryTerm nulldate "note:x" @?= Left <$> noteTag (Just "x")
+     parseQueryTerm nulldate "real:1" @?= Right (Left $ Real True)
+     parseQueryTerm nulldate "date:2008" @?= Right (Left $ Date $ DateSpan (Just $ fromGregorian 2008 01 01) (Just $ fromGregorian 2009 01 01))
+     parseQueryTerm nulldate "date:from 2012/5/17" @?= Right (Left $ Date $ DateSpan (Just $ fromGregorian 2012 05 17) Nothing)
+     parseQueryTerm nulldate "date:20180101-201804" @?= Right (Left $ Date $ DateSpan (Just $ fromGregorian 2018 01 01) (Just $ fromGregorian 2018 04 01))
+     parseQueryTerm nulldate "inacct:a" @?= Right (Right $ QueryOptInAcct "a")
+     parseQueryTerm nulldate "tag:a" @?= Right (Left $ Tag (toRegexCI' "a") Nothing)
+     parseQueryTerm nulldate "tag:a=some value" @?= Right (Left $ Tag (toRegexCI' "a") (Just $ toRegexCI' "some value"))
+     parseQueryTerm nulldate "amt:<0" @?= Right (Left $ Amt Lt 0)
+     parseQueryTerm nulldate "amt:>10000.10" @?= Right (Left $ Amt AbsGt 10000.1)
 
  ,test "parseAmountQueryTerm" $ do
-     parseAmountQueryTerm "<0" @?= (Lt,0) -- special case for convenience, since AbsLt 0 would be always false
-     parseAmountQueryTerm ">0" @?= (Gt,0) -- special case for convenience and consistency with above
-     parseAmountQueryTerm ">10000.10" @?= (AbsGt,10000.1)
-     parseAmountQueryTerm "=0.23" @?= (AbsEq,0.23)
-     parseAmountQueryTerm "0.23" @?= (AbsEq,0.23)
-     parseAmountQueryTerm "<=+0.23" @?= (LtEq,0.23)
-     parseAmountQueryTerm "-0.23" @?= (Eq,(-0.23))
-     -- ,test "number beginning with decimal mark" $ parseAmountQueryTerm "=.23" @?= (AbsEq,0.23) -- XXX
+     parseAmountQueryTerm "<0" @?= Right (Lt,0) -- special case for convenience, since AbsLt 0 would be always false
+     parseAmountQueryTerm ">0" @?= Right (Gt,0) -- special case for convenience and consistency with above
+     parseAmountQueryTerm " > - 0 " @?= Right (Gt,0) -- accept whitespace around the argument parts
+     parseAmountQueryTerm ">10000.10" @?= Right (AbsGt,10000.1)
+     parseAmountQueryTerm "=0.23" @?= Right (AbsEq,0.23)
+     parseAmountQueryTerm "0.23" @?= Right (AbsEq,0.23)
+     parseAmountQueryTerm "<=+0.23" @?= Right (LtEq,0.23)
+     parseAmountQueryTerm "-0.23" @?= Right (Eq,(-0.23))
+     assertLeft $ parseAmountQueryTerm "-0,23"
+     assertLeft $ parseAmountQueryTerm "=.23"
 
  ,test "queryStartDate" $ do
    let small = Just $ fromGregorian 2000 01 01
@@ -745,14 +757,14 @@
    queryEndDate False (Or [Date $ DateSpan Nothing small, Date $ DateSpan Nothing Nothing]) @?= Nothing
 
  ,test "matchesAccount" $ do
-     assertBool "" $ (Acct "b:c") `matchesAccount` "a:bb:c:d"
-     assertBool "" $ not $ (Acct "^a:b") `matchesAccount` "c:a:b"
+     assertBool "" $ (Acct $ toRegex' "b:c") `matchesAccount` "a:bb:c:d"
+     assertBool "" $ not $ (Acct $ toRegex' "^a:b") `matchesAccount` "c:a:b"
     assertBool "" $ Depth 2 `matchesAccount` "a"
     assertBool "" $ Depth 2 `matchesAccount` "a:b"
     assertBool "" $ not $ Depth 2 `matchesAccount` "a:b:c"
     assertBool "" $ Date nulldatespan `matchesAccount` "a"
     assertBool "" $ Date2 nulldatespan `matchesAccount` "a"
-     assertBool "" $ not $ (Tag "a" Nothing) `matchesAccount` "a"
+     assertBool "" $ not $ Tag (toRegex' "a") Nothing `matchesAccount` "a"
 
  ,tests "matchesPosting" [
    test "positive match on cleared posting status" $
@@ -768,32 +780,33 @@
   ,test "real:1 on real posting" $ assertBool "" $ (Real True) `matchesPosting` nullposting{ptype=RegularPosting}
   ,test "real:1 on virtual posting fails" $ assertBool "" $ not $ (Real True) `matchesPosting` nullposting{ptype=VirtualPosting}
   ,test "real:1 on balanced virtual posting fails" $ assertBool "" $ not $ (Real True) `matchesPosting` nullposting{ptype=BalancedVirtualPosting}
-  ,test "acct:" $ assertBool "" $ (Acct "'b") `matchesPosting` nullposting{paccount="'b"}
+  ,test "acct:" $ assertBool "" $ (Acct $ toRegex' "'b") `matchesPosting` nullposting{paccount="'b"}
   ,test "tag:" $ do
-     assertBool "" $ not $ (Tag "a" (Just "r$")) `matchesPosting` nullposting
-     assertBool "" $ (Tag "foo" Nothing) `matchesPosting` nullposting{ptags=[("foo","")]}
-     assertBool "" $ (Tag "foo" Nothing) `matchesPosting` nullposting{ptags=[("foo","baz")]}
-     assertBool "" $ (Tag "foo" (Just "a")) `matchesPosting` nullposting{ptags=[("foo","bar")]}
-     assertBool "" $ not $ (Tag "foo" (Just "a$")) `matchesPosting` nullposting{ptags=[("foo","bar")]}
-     assertBool "" $ not $ (Tag " foo " (Just "a")) `matchesPosting` nullposting{ptags=[("foo","bar")]}
-     assertBool "" $ not $ (Tag "foo foo" (Just " ar ba ")) `matchesPosting` nullposting{ptags=[("foo foo","bar bar")]}
-  ,test "a tag match on a posting also sees inherited tags" $ assertBool "" $ (Tag "txntag" Nothing) `matchesPosting` nullposting{ptransaction=Just nulltransaction{ttags=[("txntag","")]}}
+     assertBool "" $ not $ (Tag (toRegex' "a") (Just $ toRegex' "r$")) `matchesPosting` nullposting
+     assertBool "" $ (Tag (toRegex' "foo") Nothing) `matchesPosting` nullposting{ptags=[("foo","")]}
+     assertBool "" $ (Tag (toRegex' "foo") Nothing) `matchesPosting` nullposting{ptags=[("foo","baz")]}
+     assertBool "" $ (Tag (toRegex' "foo") (Just $ toRegex' "a")) `matchesPosting` nullposting{ptags=[("foo","bar")]}
+     assertBool "" $ not $ (Tag (toRegex' "foo") (Just $ toRegex' "a$")) `matchesPosting` nullposting{ptags=[("foo","bar")]}
+     assertBool "" $ not $ (Tag (toRegex' " foo ") (Just $ toRegex' "a")) `matchesPosting` nullposting{ptags=[("foo","bar")]}
+     assertBool "" $ not $ (Tag (toRegex' "foo foo") (Just $ toRegex' " ar ba ")) `matchesPosting` nullposting{ptags=[("foo foo","bar bar")]}
+  ,test "a tag match on a posting also sees inherited tags" $ assertBool "" $ (Tag (toRegex' "txntag") Nothing) `matchesPosting` nullposting{ptransaction=Just nulltransaction{ttags=[("txntag","")]}}
  ,test "cur:" $ do
    assertBool "" $ not $ (Sym "$") `matchesPosting`
nullposting{pamount=Mixed [usd 1]} -- becomes "^$$", ie testing for null symbol - assertBool "" $ (Sym "\\$") `matchesPosting` nullposting{pamount=Mixed [usd 1]} -- have to quote $ for regexpr - assertBool "" $ (Sym "shekels") `matchesPosting` nullposting{pamount=Mixed [nullamt{acommodity="shekels"}]} - assertBool "" $ not $ (Sym "shek") `matchesPosting` nullposting{pamount=Mixed [nullamt{acommodity="shekels"}]} + let toSym = either id (const $ error' "No query opts") . either error' id . parseQueryTerm (fromGregorian 2000 01 01) . ("cur:"<>) + assertBool "" $ not $ toSym "$" `matchesPosting` nullposting{pamount=Mixed [usd 1]} -- becomes "^$$", ie testing for null symbol + assertBool "" $ (toSym "\\$") `matchesPosting` nullposting{pamount=Mixed [usd 1]} -- have to quote $ for regexpr + assertBool "" $ (toSym "shekels") `matchesPosting` nullposting{pamount=Mixed [nullamt{acommodity="shekels"}]} + assertBool "" $ not $ (toSym "shek") `matchesPosting` nullposting{pamount=Mixed [nullamt{acommodity="shekels"}]} ] ,test "matchesTransaction" $ do assertBool "" $ Any `matchesTransaction` nulltransaction - assertBool "" $ not $ (Desc "x x") `matchesTransaction` nulltransaction{tdescription="x"} - assertBool "" $ (Desc "x x") `matchesTransaction` nulltransaction{tdescription="x x"} + assertBool "" $ not $ (Desc $ toRegex' "x x") `matchesTransaction` nulltransaction{tdescription="x"} + assertBool "" $ (Desc $ toRegex' "x x") `matchesTransaction` nulltransaction{tdescription="x x"} -- see posting for more tag tests - assertBool "" $ (Tag "foo" (Just "a")) `matchesTransaction` nulltransaction{ttags=[("foo","bar")]} - assertBool "" $ (Tag "payee" (Just "payee")) `matchesTransaction` nulltransaction{tdescription="payee|note"} - assertBool "" $ (Tag "note" (Just "note")) `matchesTransaction` nulltransaction{tdescription="payee|note"} + assertBool "" $ (Tag (toRegex' "foo") (Just $ toRegex' "a")) `matchesTransaction` nulltransaction{ttags=[("foo","bar")]} + assertBool "" $ (Tag 
(toRegex' "payee") (Just $ toRegex' "payee")) `matchesTransaction` nulltransaction{tdescription="payee|note"} + assertBool "" $ (Tag (toRegex' "note") (Just $ toRegex' "note")) `matchesTransaction` nulltransaction{tdescription="payee|note"} -- a tag match on a transaction also matches posting tags - assertBool "" $ (Tag "postingtag" Nothing) `matchesTransaction` nulltransaction{tpostings=[nullposting{ptags=[("postingtag","")]}]} + assertBool "" $ (Tag (toRegex' "postingtag") Nothing) `matchesTransaction` nulltransaction{tpostings=[nullposting{ptags=[("postingtag","")]}]} ] diff -Nru haskell-hledger-lib-1.18.1/Hledger/Read/Common.hs haskell-hledger-lib-1.19.1/Hledger/Read/Common.hs --- haskell-hledger-lib-1.18.1/Hledger/Read/Common.hs 2020-06-04 21:01:48.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Read/Common.hs 2020-09-07 18:57:06.000000000 +0000 @@ -12,20 +12,19 @@ -} --- ** language -{-# LANGUAGE BangPatterns #-} -{-# LANGUAGE CPP #-} -{-# LANGUAGE DeriveDataTypeable #-} -{-# LANGUAGE FlexibleContexts #-} -{-# LANGUAGE LambdaCase #-} -{-# LANGUAGE NamedFieldPuns #-} -{-# LANGUAGE NoMonoLocalBinds #-} -{-# LANGUAGE OverloadedStrings #-} -{-# LANGUAGE PackageImports #-} -{-# LANGUAGE Rank2Types #-} -{-# LANGUAGE RecordWildCards #-} +{-# LANGUAGE BangPatterns #-} +{-# LANGUAGE CPP #-} +{-# LANGUAGE FlexibleContexts #-} +{-# LANGUAGE LambdaCase #-} +{-# LANGUAGE NamedFieldPuns #-} +{-# LANGUAGE NoMonoLocalBinds #-} +{-# LANGUAGE OverloadedStrings #-} +{-# LANGUAGE PackageImports #-} +{-# LANGUAGE Rank2Types #-} +{-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} -{-# LANGUAGE TupleSections #-} -{-# LANGUAGE TypeFamilies #-} +{-# LANGUAGE TupleSections #-} +{-# LANGUAGE TypeFamilies #-} --- ** exports module Hledger.Read.Common ( @@ -105,6 +104,9 @@ singlespacedtextsatisfyingp, singlespacep, + skipNonNewlineSpaces, + skipNonNewlineSpaces1, + -- * tests tests_Common, ) @@ -113,31 +115,33 @@ --- ** imports import Prelude () import 
"base-compat-batteries" Prelude.Compat hiding (fail, readFile) +import Control.Applicative.Permutations (runPermutation, toPermutationWithDefault) import qualified "base-compat-batteries" Control.Monad.Fail.Compat as Fail (fail) import Control.Monad.Except (ExceptT(..), runExceptT, throwError) import Control.Monad.State.Strict hiding (fail) import Data.Bifunctor (bimap, second) -import Data.Char -import Data.Data +import Data.Char (digitToInt, isDigit, isSpace) import Data.Decimal (DecimalRaw (Decimal), Decimal) -import Data.Default +import Data.Default (Default(..)) import Data.Function ((&)) -import Data.Functor.Identity +import Data.Functor.Identity (Identity) import "base-compat-batteries" Data.List.Compat import Data.List.NonEmpty (NonEmpty(..)) -import Data.Maybe +import Data.Maybe (catMaybes, fromMaybe, isJust, listToMaybe) import qualified Data.Map as M import qualified Data.Semigroup as Sem import Data.Text (Text) import qualified Data.Text as T -import Data.Time.Calendar -import Data.Time.LocalTime +import Data.Time.Calendar (Day, fromGregorianValid, toGregorian) +import Data.Time.LocalTime (LocalTime(..), TimeOfDay(..)) +import Data.Word (Word8) import System.Time (getClockTime) import Text.Megaparsec -import Text.Megaparsec.Char +import Text.Megaparsec.Char (char, char', digitChar, newline, string) import Text.Megaparsec.Char.Lexer (decimal) import Text.Megaparsec.Custom -import Control.Applicative.Permutations + (FinalParseError, attachSource, customErrorBundlePretty, + finalErrorBundlePretty, parseErrorAt, parseErrorAtRegion) import Hledger.Data import Hledger.Utils @@ -190,7 +194,7 @@ ,new_save_ :: Bool -- ^ save latest new transactions state for next time ,pivot_ :: String -- ^ use the given field's value as the account name ,auto_ :: Bool -- ^ generate automatic postings when journal is parsed - } deriving (Show, Data) --, Typeable) + } deriving (Show) instance Default InputOpts where def = definputopts @@ -237,14 +241,13 @@ rejp = 
runErroringJournalParser genericSourcePos :: SourcePos -> GenericSourcePos -genericSourcePos p = GenericSourcePos (sourceName p) (fromIntegral . unPos $ sourceLine p) (fromIntegral . unPos $ sourceColumn p) +genericSourcePos p = GenericSourcePos (sourceName p) (unPos $ sourceLine p) (unPos $ sourceColumn p) -- | Construct a generic start & end line parse position from start and end megaparsec SourcePos's. journalSourcePos :: SourcePos -> SourcePos -> GenericSourcePos -journalSourcePos p p' = JournalSourcePos (sourceName p) (fromIntegral . unPos $ sourceLine p, fromIntegral $ line') - where line' - | (unPos $ sourceColumn p') == 1 = unPos (sourceLine p') - 1 - | otherwise = unPos $ sourceLine p' -- might be at end of file withat last new-line +journalSourcePos p p' = JournalSourcePos (sourceName p) (unPos $ sourceLine p, line') + where line' | (unPos $ sourceColumn p') == 1 = unPos (sourceLine p') - 1 + | otherwise = unPos $ sourceLine p' -- might be at end of file withat last new-line -- | Given a parser to ParsedJournal, input options, file path and -- content: run the parser on the content, and finalise the result to @@ -295,6 +298,7 @@ journalFinalise :: InputOpts -> FilePath -> Text -> Journal -> ExceptT String IO Journal journalFinalise iopts f txt pj = do t <- liftIO getClockTime + d <- liftIO getCurrentDay -- Infer and apply canonical styles for each commodity (or fail). -- This affects transaction balancing/assertions/assignments, so needs to be done early. -- (TODO: since #903's refactoring for hledger 1.12, @@ -319,11 +323,13 @@ -- then add the auto postings -- (Note adding auto postings after balancing means #893b fails; -- adding them before balancing probably means #893a, #928, #938 fail.) - let j'' = journalModifyTransactions j' - -- then apply commodity styles once more, to style the auto posting amounts. (XXX inefficient ?) - j''' <- journalApplyCommodityStyles j'' - -- then check balance assertions. 
-      journalBalanceTransactions (not $ ignore_assertions_ iopts) j'''
+      case journalModifyTransactions d j' of
+        Left e -> throwError e
+        Right j'' -> do
+          -- then apply commodity styles once more, to style the auto posting amounts. (XXX inefficient ?)
+          j''' <- journalApplyCommodityStyles j''
+          -- then check balance assertions.
+          journalBalanceTransactions (not $ ignore_assertions_ iopts) j'''
     )
   & fmap journalInferMarketPricesFromTransactions  -- infer market prices from commodity-exchanging transactions
@@ -412,15 +418,15 @@
 statusp :: TextParser m Status
 statusp =
   choice'
-    [ skipMany spacenonewline >> char '*' >> return Cleared
-    , skipMany spacenonewline >> char '!' >> return Pending
+    [ skipNonNewlineSpaces >> char '*' >> return Cleared
+    , skipNonNewlineSpaces >> char '!' >> return Pending
     , return Unmarked
     ]
 
 codep :: TextParser m Text
 codep = option "" $ do
   try $ do
-    skipSome spacenonewline
+    skipNonNewlineSpaces1
     char '('
   code <- takeWhileP Nothing $ \c -> c /= ')' && c /= '\n'
   char ')' <?> "closing bracket ')' for transaction code"
@@ -443,45 +449,44 @@
 datep' :: Maybe Year -> TextParser m Day
 datep' mYear = do
-  startOffset <- getOffset
-  d1 <- decimal <?> "year or month"
-  sep <- satisfy isDateSepChar <?> "date separator"
-  d2 <- decimal <?> "month or day"
-  fullDate startOffset d1 sep d2 <|> partialDate startOffset mYear d1 sep d2
-    <?> "full or partial date"
-
+    startOffset <- getOffset
+    d1 <- yearorintp <?> "year or month"
+    sep <- datesepchar <?> "date separator"
+    d2 <- decimal <?> "month or day"
+    case d1 of
+      Left y  -> fullDate startOffset y sep d2
+      Right m -> partialDate startOffset mYear m sep d2
+      <?> "full or partial date"
  where
-
-  fullDate :: Int -> Integer -> Char -> Int -> TextParser m Day
-  fullDate startOffset year sep1 month = do
-    sep2 <- satisfy isDateSepChar <?> "date separator"
-    day <- decimal <?> "day"
-    endOffset <- getOffset
-    let dateStr = show year ++ [sep1] ++ show month ++ [sep2] ++ show day
-
-    when (sep1 /= sep2) $ customFailure $ parseErrorAtRegion startOffset endOffset $
-      "invalid date (mixing date separators is not allowed): " ++ dateStr
-
-    case fromGregorianValid year month day of
-      Nothing -> customFailure $ parseErrorAtRegion startOffset endOffset $
-                   "well-formed but invalid date: " ++ dateStr
-      Just date -> pure $! date
-
-  partialDate
-    :: Int -> Maybe Year -> Integer -> Char -> Int -> TextParser m Day
-  partialDate startOffset mYear month sep day = do
-    endOffset <- getOffset
-    case mYear of
-      Just year ->
-        case fromGregorianValid year (fromIntegral month) day of
-          Nothing -> customFailure $ parseErrorAtRegion startOffset endOffset $
-                       "well-formed but invalid date: " ++ dateStr
-          Just date -> pure $! date
-        where dateStr = show year ++ [sep] ++ show month ++ [sep] ++ show day
-
-      Nothing -> customFailure $ parseErrorAtRegion startOffset endOffset $
-                   "partial date "++dateStr++" found, but the current year is unknown"
-        where dateStr = show month ++ [sep] ++ show day
+  fullDate :: Int -> Year -> Char -> Month -> TextParser m Day
+  fullDate startOffset year sep1 month = do
+    sep2 <- satisfy isDateSepChar <?> "date separator"
+    day <- decimal <?> "day"
+    endOffset <- getOffset
+    let dateStr = show year ++ [sep1] ++ show month ++ [sep2] ++ show day
+
+    when (sep1 /= sep2) $ customFailure $ parseErrorAtRegion startOffset endOffset $
+      "invalid date (mixing date separators is not allowed): " ++ dateStr
+
+    case fromGregorianValid year month day of
+      Nothing -> customFailure $ parseErrorAtRegion startOffset endOffset $
+                   "well-formed but invalid date: " ++ dateStr
+      Just date -> pure $! date
+
+  partialDate :: Int -> Maybe Year -> Month -> Char -> MonthDay -> TextParser m Day
+  partialDate startOffset mYear month sep day = do
+    endOffset <- getOffset
+    case mYear of
+      Just year ->
+        case fromGregorianValid year month day of
+          Nothing -> customFailure $ parseErrorAtRegion startOffset endOffset $
+                       "well-formed but invalid date: " ++ dateStr
+          Just date -> pure $! date
+        where dateStr = show year ++ [sep] ++ show month ++ [sep] ++ show day
+
+      Nothing -> customFailure $ parseErrorAtRegion startOffset endOffset $
+                   "partial date "++dateStr++" found, but the current year is unknown"
+        where dateStr = show month ++ [sep] ++ show day
 
 {-# INLINABLE datep' #-}
@@ -499,7 +504,7 @@
 datetimep' :: Maybe Year -> TextParser m LocalTime
 datetimep' mYear = do
   day <- datep' mYear
-  skipSome spacenonewline
+  skipNonNewlineSpaces1
   time <- timeOfDay
   optional timeZone  -- ignoring time zones
   pure $ LocalTime day time
@@ -548,21 +553,36 @@
 secondarydatep primaryDate = char '=' *> datep' (Just primaryYear)
   where primaryYear = first3 $ toGregorian primaryDate
 
+-- | Parse a year number or an Int. Years must contain at least four
+-- digits.
+yearorintp :: TextParser m (Either Year Int)
+yearorintp = do
+    yearOrMonth <- takeWhile1P (Just "digit") isDigit
+    let n = readDecimal yearOrMonth
+    return $ if T.length yearOrMonth >= 4 then Left n else Right (fromInteger n)
+
 --- *** account names
 
 -- | Parse an account name (plus one following space if present),
 -- then apply any parent account prefix and/or account aliases currently in effect,
 -- in that order. (Ie first add the parent account prefix, then rewrite with aliases).
+-- This calls error if any account alias with an invalid regular expression exists.
 modifiedaccountnamep :: JournalParser m AccountName
 modifiedaccountnamep = do
-  parent <- getParentAccount
+  parent  <- getParentAccount
   aliases <- getAccountAliases
-  a <- lift accountnamep
-  return $!
-    accountNameApplyAliases aliases $
-    -- XXX accountNameApplyAliasesMemo ? doesn't seem to make a difference (retest that function)
-    joinAccountNames parent
-    a
+  -- off1 <- getOffset
+  a       <- lift accountnamep
+  -- off2 <- getOffset
+  -- XXX or accountNameApplyAliasesMemo ? doesn't seem to make a difference (retest that function)
+  case accountNameApplyAliases aliases $ joinAccountNames parent a of
+    Right a' -> return $! a'
+    -- should not happen, regexaliasp will have displayed a better error already:
+    -- (XXX why does customFailure cause error to be displayed there, but not here ?)
+    -- Left e -> customFailure $! parseErrorAtRegion off1 off2 err
+    Left e  -> error' err  -- PARTIAL:
+      where
+        err = "problem in account alias applied to "++T.unpack a++": "++e
 
 -- | Parse an account name, plus one following space if present.
 -- Account names have one or more parts separated by the account separator character,
@@ -595,7 +615,7 @@
 -- | Parse one non-newline whitespace character that is not followed by another one.
 singlespacep :: TextParser m ()
-singlespacep = void spacenonewline *> notFollowedBy spacenonewline
+singlespacep = spacenonewline *> notFollowedBy spacenonewline
 
 --- *** amounts
 
@@ -605,7 +625,7 @@
 spaceandamountormissingp :: JournalParser m MixedAmount
 spaceandamountormissingp =
   option missingmixedamt $ try $ do
-    lift $ skipSome spacenonewline
+    lift $ skipNonNewlineSpaces1
     Mixed . (:[]) <$> amountp
 
 -- | Parse a single-commodity amount, with optional symbol on the left
@@ -614,7 +634,7 @@
 -- lot date. A lot price and lot date will be ignored.
 amountp :: JournalParser m Amount
 amountp = label "amount" $ do
-  let spaces = lift $ skipMany spacenonewline
+  let spaces = lift $ skipNonNewlineSpaces
   amount <- amountwithoutpricep <* spaces
   (mprice, _elotprice, _elotdate) <- runPermutation $
     (,,) <$> toPermutationWithDefault Nothing (Just <$> priceamountp <* spaces)
@@ -625,7 +645,7 @@
 -- XXX Just like amountp but don't allow lot prices. Needed for balanceassertionp.
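[editor's note] The new `yearorintp` in the hunk above decides between a `Year` and an `Int` purely by digit count. A minimal standalone sketch of just that rule, using a plain `String` instead of the real megaparsec/`Text` machinery (the name `yearOrInt` is hypothetical, not hledger's):

```haskell
-- Four or more digits are taken as a year (Left), fewer as a small Int
-- such as a month (Right), mirroring yearorintp's digit-count rule.
-- Illustrative sketch only, not hledger code.
yearOrInt :: String -> Either Integer Int
yearOrInt ds
  | length ds >= 4 = Left n
  | otherwise      = Right (fromInteger n)
  where n = read ds :: Integer

main :: IO ()
main = do
  print (yearOrInt "2020")  -- Left 2020
  print (yearOrInt "12")    -- Right 12
```

Note that "0001" is still a year under this rule: only the digit count matters, not the magnitude, which is why `datep'` can no longer confuse a zero-padded year with a month.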
 amountpnolotprices :: JournalParser m Amount
 amountpnolotprices = label "amount" $ do
-  let spaces = lift $ skipMany spacenonewline
+  let spaces = lift $ skipNonNewlineSpaces
   amount <- amountwithoutpricep
   spaces
   mprice <- optional $ priceamountp <* spaces
@@ -642,7 +662,7 @@
 leftsymbolamountp mult sign = label "amount" $ do
   c <- lift commoditysymbolp
   suggestedStyle <- getAmountStyle c
-  commodityspaced <- lift $ skipMany' spacenonewline
+  commodityspaced <- lift skipNonNewlineSpaces'
   sign2 <- lift $ signp
   offBeforeNum <- getOffset
   ambiguousRawNum <- lift rawnumberp
@@ -660,7 +680,7 @@
   mExponent <- lift $ optional $ try exponentp
   offAfterNum <- getOffset
   let numRegion = (offBeforeNum, offAfterNum)
-  mSpaceAndCommodity <- lift $ optional $ try $ (,) <$> skipMany' spacenonewline <*> commoditysymbolp
+  mSpaceAndCommodity <- lift $ optional $ try $ (,) <$> skipNonNewlineSpaces' <*> commoditysymbolp
   case mSpaceAndCommodity of
     -- right symbol amount
     Just (commodityspaced, c) -> do
@@ -686,21 +706,21 @@
   :: (Int, Int) -- offsets
   -> Maybe AmountStyle
   -> Either AmbiguousNumber RawNumber
-  -> Maybe Int
-  -> TextParser m (Quantity, Int, Maybe Char, Maybe DigitGroupStyle)
+  -> Maybe Integer
+  -> TextParser m (Quantity, AmountPrecision, Maybe Char, Maybe DigitGroupStyle)
 interpretNumber posRegion suggestedStyle ambiguousNum mExp =
   let rawNum = either (disambiguateNumber suggestedStyle) id ambiguousNum
   in case fromRawNumber rawNum mExp of
        Left errMsg -> customFailure $ uncurry parseErrorAtRegion posRegion errMsg
-       Right res -> pure res
+       Right (q,p,d,g) -> pure (q, Precision p, d, g)
 
 -- | Parse an amount from a string, or get an error.
 amountp' :: String -> Amount
 amountp' s =
   case runParser (evalStateT (amountp <* eof) nulljournal) "" (T.pack s) of
     Right amt -> amt
-    Left err  -> error' $ show err -- XXX should throwError
+    Left err  -> error' $ show err -- PARTIAL: XXX should throwError
 
 -- | Parse a mixed amount from a string, or get an error.
 mamountp' :: String -> MixedAmount
@@ -709,23 +729,11 @@
 -- | Parse a minus or plus sign followed by zero or more spaces,
 -- or nothing, returning a function that negates or does nothing.
 signp :: Num a => TextParser m (a -> a)
-signp = ((char '-' *> pure negate <|> char '+' *> pure id) <* many spacenonewline) <|> pure id
+signp = ((char '-' *> pure negate <|> char '+' *> pure id) <* skipNonNewlineSpaces) <|> pure id
 
 multiplierp :: TextParser m Bool
 multiplierp = option False $ char '*' *> pure True
 
--- | This is like skipMany but it returns True if at least one element
--- was skipped. This is helpful if you’re just using many to check if
--- the resulting list is empty or not.
-skipMany' :: MonadPlus m => m a -> m Bool
-skipMany' p = go False
-  where
-    go !isNull = do
-      more <- option False (True <$ p)
-      if more
-        then go True
-        else pure isNull
-
 commoditysymbolp :: TextParser m CommoditySymbol
 commoditysymbolp =
   quotedcommoditysymbolp <|> simplecommoditysymbolp <?> "commodity symbol"
@@ -746,7 +754,7 @@
     priceConstructor <- char '@' *> pure TotalPrice <|> pure UnitPrice
 
   when parenthesised $ void $ char ')'
-  lift (skipMany spacenonewline)
+  lift skipNonNewlineSpaces
   priceAmount <- amountwithoutpricep -- "unpriced amount (specifying a price)"
   pure $ priceConstructor priceAmount
@@ -757,7 +765,7 @@
   char '='
   istotal <- fmap isJust $ optional $ try $ char '='
   isinclusive <- fmap isJust $ optional $ try $ char '*'
-  lift (skipMany spacenonewline)
+  lift skipNonNewlineSpaces
   -- this amount can have a price; balance assertions ignore it,
   -- but balance assignments will use it
   a <- amountpnolotprices <?> "amount (for a balance assertion or assignment)"
@@ -776,10 +784,10 @@
 lotpricep = label "ledger-style lot price" $ do
   char '{'
   doublebrace <- option False $ char '{' >> pure True
-  _fixed <- fmap isJust $ optional $ lift (skipMany spacenonewline) >> char '='
-  lift (skipMany spacenonewline)
+  _fixed <- fmap isJust $ optional $ lift skipNonNewlineSpaces >> char '='
+  lift skipNonNewlineSpaces
   _a <- amountwithoutpricep
-  lift (skipMany spacenonewline)
+  lift skipNonNewlineSpaces
   char '}'
   when (doublebrace) $ void $ char '}'
   return ()
@@ -789,9 +797,9 @@
 lotdatep :: JournalParser m ()
 lotdatep = (do
   char '['
-  lift (skipMany spacenonewline)
+  lift skipNonNewlineSpaces
   _d <- datep
-  lift (skipMany spacenonewline)
+  lift skipNonNewlineSpaces
   char ']'
   return ()
   ) <?> "ledger-style lot date"
@@ -808,7 +816,7 @@
 -- seen following the decimal mark), the decimal mark character used if any,
 -- and the digit group style if any.
 --
-numberp :: Maybe AmountStyle -> TextParser m (Quantity, Int, Maybe Char, Maybe DigitGroupStyle)
+numberp :: Maybe AmountStyle -> TextParser m (Quantity, Word8, Maybe Char, Maybe DigitGroupStyle)
 numberp suggestedStyle = label "number" $ do
   -- a number is an optional sign followed by a sequence of digits possibly
   -- interspersed with periods, commas, or both
@@ -816,13 +824,13 @@
   sign <- signp
   rawNum <- either (disambiguateNumber suggestedStyle) id <$> rawnumberp
   mExp <- optional $ try $ exponentp
-  dbg8 "numberp suggestedStyle" suggestedStyle `seq` return ()
-  case dbg8 "numberp quantity,precision,mdecimalpoint,mgrps"
+  dbg7 "numberp suggestedStyle" suggestedStyle `seq` return ()
+  case dbg7 "numberp quantity,precision,mdecimalpoint,mgrps"
        $ fromRawNumber rawNum mExp of
     Left errMsg -> Fail.fail errMsg
     Right (q, p, d, g) -> pure (sign q, p, d, g)
 
-exponentp :: TextParser m Int
+exponentp :: TextParser m Integer
 exponentp = char' 'e' *> signp <*> decimal <?> "exponent"
 
 -- | Interpret a raw number as a decimal number.
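[editor's note] The switch of `exponentp` above from `Int` to `Integer` matters because a fixed-width `Int` silently wraps on 64-bit overflow, while `Integer` is unbounded. A small base-only demonstration of the failure mode being avoided (not hledger code):

```haskell
main :: IO ()
main = do
  -- one more than maxBound :: Int on 64-bit platforms
  let big = "9223372036854775808"
  -- Integer represents the value exactly:
  print (read big :: Integer)
  -- narrowing the same value to Int wraps around to minBound:
  print (fromInteger (read big) :: Int)
```

With the old `Int` exponent, `decimal` would have produced the wrapped value directly; parsing into `Integer` keeps the exponent exact, and the 255-decimal-place limit elsewhere catches unreasonable magnitudes with an error instead.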
@@ -834,50 +842,40 @@
 --   the digit group style, if any (digit group character and sizes of digit groups)
 fromRawNumber
   :: RawNumber
-  -> Maybe Int
+  -> Maybe Integer
   -> Either String
-     (Quantity, Int, Maybe Char, Maybe DigitGroupStyle)
-fromRawNumber raw mExp = case raw of
-
-  NoSeparators digitGrp mDecimals ->
-    let mDecPt = fmap fst mDecimals
-        decimalGrp = maybe mempty snd mDecimals
-
-        (quantity, precision) =
-          maybe id applyExp mExp $ toQuantity digitGrp decimalGrp
-
-    in Right (quantity, precision, mDecPt, Nothing)
-
-  WithSeparators digitSep digitGrps mDecimals -> case mExp of
-    Nothing ->
-      let mDecPt = fmap fst mDecimals
-          decimalGrp = maybe mempty snd mDecimals
-          digitGroupStyle = DigitGroups digitSep (groupSizes digitGrps)
-
-          (quantity, precision) = toQuantity (mconcat digitGrps) decimalGrp
-
-      in Right (quantity, precision, mDecPt, Just digitGroupStyle)
-    Just _ -> Left
-      "invalid number: mixing digit separators with exponents is not allowed"
-
+     (Quantity, Word8, Maybe Char, Maybe DigitGroupStyle)
+fromRawNumber (WithSeparators _ _ _) (Just _) =
+    Left "invalid number: mixing digit separators with exponents is not allowed"
+fromRawNumber raw mExp = do
+    (quantity, precision) <- toQuantity (fromMaybe 0 mExp) (digitGroup raw) (decimalGroup raw)
+    return (quantity, precision, mDecPt raw, digitGroupStyle raw)
   where
+    toQuantity :: Integer -> DigitGrp -> DigitGrp -> Either String (Quantity, Word8)
+    toQuantity e preDecimalGrp postDecimalGrp
+      | precision < 0   = Right (Decimal 0 (digitGrpNum * 10^(-precision)), 0)
+      | precision < 256 = Right (Decimal precision8 digitGrpNum, precision8)
+      | otherwise = Left "invalid number: numbers with more than 255 decimal digits are not allowed at this time"
+      where
+        digitGrpNum = digitGroupNumber $ preDecimalGrp <> postDecimalGrp
+        precision   = toInteger (digitGroupLength postDecimalGrp) - e
+        precision8  = fromIntegral precision :: Word8
+
+    mDecPt (NoSeparators _ mDecimals)           = fst <$> mDecimals
+    mDecPt (WithSeparators _ _ mDecimals)       = fst <$> mDecimals
+    decimalGroup (NoSeparators _ mDecimals)     = maybe mempty snd mDecimals
+    decimalGroup (WithSeparators _ _ mDecimals) = maybe mempty snd mDecimals
+    digitGroup (NoSeparators digitGrp _)        = digitGrp
+    digitGroup (WithSeparators _ digitGrps _)   = mconcat digitGrps
+    digitGroupStyle (NoSeparators _ _)          = Nothing
+    digitGroupStyle (WithSeparators sep grps _) = Just . DigitGroups sep $ groupSizes grps
+
     -- Outputs digit group sizes from least significant to most significant
-    groupSizes :: [DigitGrp] -> [Int]
-    groupSizes digitGrps = reverse $ case map digitGroupLength digitGrps of
+    groupSizes :: [DigitGrp] -> [Word8]
+    groupSizes digitGrps = reverse $ case map (fromIntegral . digitGroupLength) digitGrps of
       (a:b:cs) | a < b -> b:cs
       gs -> gs
 
-    toQuantity :: DigitGrp -> DigitGrp -> (Quantity, Int)
-    toQuantity preDecimalGrp postDecimalGrp = (quantity, precision)
-      where
-        quantity = Decimal (fromIntegral precision)
-                           (digitGroupNumber $ preDecimalGrp <> postDecimalGrp)
-        precision = digitGroupLength postDecimalGrp
-
-    applyExp :: Int -> (Decimal, Int) -> (Decimal, Int)
-    applyExp exponent (quantity, precision) =
-      (quantity * 10^^exponent, max 0 (precision - exponent))
-
 disambiguateNumber :: Maybe AmountStyle -> AmbiguousNumber -> RawNumber
 disambiguateNumber suggestedStyle (AmbiguousNumber grp1 sep grp2) =
@@ -892,7 +890,7 @@
   isValidDecimalBy c = \case
     AmountStyle{asdecimalpoint = Just d} -> d == c
     AmountStyle{asdigitgroups = Just (DigitGroups g _)} -> g /= c
-    AmountStyle{asprecision = 0} -> False
+    AmountStyle{asprecision = Precision 0} -> False
     _ -> True
 
 -- | Parse and interpret the structure of a number without external hints.
@@ -930,7 +928,7 @@
         parseErrorAt off "invalid number (excessive trailing digits)"
       Nothing -> pure ()
 
-  return $ dbg8 "rawnumberp" rawNumber
+  return $ dbg7 "rawnumberp" rawNumber
   where
 
   leadingDecimalPt :: TextParser m RawNumber
@@ -1003,17 +1001,17 @@
 -- | Description of a single digit group in a number literal.
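[editor's note] The precision arithmetic in the new `toQuantity` above can be summarised as: precision = post-decimal digit count minus the exponent; a negative precision scales the mantissa instead, and anything over 255 no longer fits the `Word8` carried by `AmountPrecision`, so it is rejected. An illustrative base-only sketch (the name `precisionOf` is hypothetical):

```haskell
import Data.Word (Word8)

-- precisionOf postDecimalDigits exponent: the Word8 precision the parser
-- would assign, or an error once it exceeds 255. Illustrative only.
precisionOf :: Int -> Integer -> Either String Word8
precisionOf postDecimalDigits e
  | p < 0     = Right 0        -- the mantissa is multiplied by 10^(-p) instead
  | p < 256   = Right (fromIntegral p)
  | otherwise = Left "more than 255 decimal digits"
  where p = toInteger postDecimalDigits - e

main :: IO ()
main = do
  print (precisionOf 2 0)    -- "1.23"   : 2 decimal places
  print (precisionOf 2 3)    -- "1.23e3" : exponent exceeds the decimals, 0 places
  print (precisionOf 300 0)  -- rejected: would not fit in Word8
```

Computing `p` in `Integer` before narrowing is the point: the comparison against 256 happens while the value is still exact, so the `fromIntegral` to `Word8` can never wrap.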
 -- "Thousands" is one well known digit grouping, but there are others.
 data DigitGrp = DigitGrp {
-  digitGroupLength :: !Int,    -- ^ The number of digits in this group.
-  digitGroupNumber :: !Integer -- ^ The natural number formed by this group's digits.
+  digitGroupLength :: !Word,   -- ^ The number of digits in this group.
+                               --   This is Word to avoid the need to do overflow
+                               --   checking for the Semigroup instance of DigitGrp.
+  digitGroupNumber :: !Integer -- ^ The natural number formed by this group's digits. This should always be positive.
 } deriving (Eq)
 
 -- | A custom show instance, showing digit groups as the parser saw them.
 instance Show DigitGrp where
-  show (DigitGrp len num)
-    | len > 0 = "\"" ++ padding ++ numStr ++ "\""
-    | otherwise = "\"\""
+  show (DigitGrp len num) = "\"" ++ padding ++ numStr ++ "\""
     where numStr = show num
-          padding = replicate (len - length numStr) '0'
+          padding = genericReplicate (toInteger len - toInteger (length numStr)) '0'
 
 instance Sem.Semigroup DigitGrp where
   DigitGrp l1 n1 <> DigitGrp l2 n2 = DigitGrp (l1 + l2) (n1 * 10^l2 + n2)
@@ -1037,7 +1035,7 @@
   startComment = string "comment" *> trailingSpaces
   endComment = eof <|> string "end comment" *> trailingSpaces
 
-  trailingSpaces = skipMany spacenonewline <* newline
+  trailingSpaces = skipNonNewlineSpaces <* newline
   anyLine = void $ takeWhileP Nothing (\c -> c /= '\n') *> newline
 
 {-# INLINABLE multilinecommentp #-}
@@ -1047,7 +1045,7 @@
 -- is semicolon, hash, or star.
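[editor's note] The `DigitGrp` Semigroup above appends digit groups by shifting the left group's number past the right group's length; tracking the length separately from the number is what preserves leading zeros. A self-contained replica of just that instance (field names abbreviated):

```haskell
-- Standalone replica of the DigitGrp Semigroup shown in the diff above.
data DigitGrp = DigitGrp
  { grpLength :: Word     -- number of digits, including leading zeros
  , grpNumber :: Integer  -- the digits read as a natural number
  } deriving (Eq, Show)

instance Semigroup DigitGrp where
  DigitGrp l1 n1 <> DigitGrp l2 n2 = DigitGrp (l1 + l2) (n1 * 10 ^ l2 + n2)

instance Monoid DigitGrp where
  mempty = DigitGrp 0 0

main :: IO ()
main = do
  print (DigitGrp 3 123 <> DigitGrp 3 456)  -- "123" <> "456" = "123456"
  print (DigitGrp 2 0 <> DigitGrp 1 7)      -- "00" <> "7" = "007": length 3, number 7
```

Using `Word` for the length means `l1 + l2` needs no overflow check in practice, which is exactly the rationale given in the field's new haddock comment.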
 emptyorcommentlinep :: TextParser m ()
 emptyorcommentlinep = do
-  skipMany spacenonewline
+  skipNonNewlineSpaces
   skiplinecommentp <|> void newline
   where
     skiplinecommentp :: TextParser m ()
@@ -1076,13 +1074,13 @@
 -- followingcommentp' :: (Monoid a, Show a) => TextParser m a -> TextParser m (Text, a)
 followingcommentp' contentp = do
-  skipMany spacenonewline
+  skipNonNewlineSpaces
   -- there can be 0 or 1 sameLine
   sameLine <- try headerp *> ((:[]) <$> match' contentp) <|> pure []
   _ <- eolof
   -- there can be 0 or more nextLines
   nextLines <- many $
-    try (skipSome spacenonewline *> headerp) *> match' contentp <* eolof
+    try (skipNonNewlineSpaces1 *> headerp) *> match' contentp <* eolof
   let
     -- if there's just a next-line comment, insert an empty same-line comment
     -- so the next-line comment doesn't get rendered as a same-line comment.
@@ -1094,7 +1092,7 @@
   pure (strippedCommentText, commentContent)
   where
-    headerp = char ';' *> skipMany spacenonewline
+    headerp = char ';' *> skipNonNewlineSpaces
 
 {-# INLINABLE followingcommentp' #-}
@@ -1158,7 +1156,7 @@
       if T.null name
         then commenttagsp
         else do
-          skipMany spacenonewline
+          skipNonNewlineSpaces
          val <- tagValue
          let tag = (name, val)
          (tag:) <$> commenttagsp
@@ -1256,7 +1254,7 @@
   atColon :: Text -> TextParser m ([Tag], [DateTag])
   atColon name = char ':' *> do
-    skipMany spacenonewline
+    skipNonNewlineSpaces
    (tags, dateTags) <- case name of
      ""     -> pure ([], [])
      "date" -> dateValue name
@@ -1342,38 +1340,38 @@
  tests "amountp" [
    test "basic"                  $ assertParseEq amountp "$47.18" (usd 47.18)
-  ,test "ends with decimal mark" $ assertParseEq amountp "$1." (usd 1 `withPrecision` 0)
+  ,test "ends with decimal mark" $ assertParseEq amountp "$1." (usd 1 `withPrecision` Precision 0)
   ,test "unit price" $ assertParseEq amountp
    "$10 @ €0.5"
    -- not precise enough:
    -- (usd 10 `withPrecision` 0 `at` (eur 0.5 `withPrecision` 1)) -- `withStyle` asdecimalpoint=Just '.'
amount{ acommodity="$" ,aquantity=10 -- need to test internal precision with roundTo ? I think not - ,astyle=amountstyle{asprecision=0, asdecimalpoint=Nothing} + ,astyle=amountstyle{asprecision=Precision 0, asdecimalpoint=Nothing} ,aprice=Just $ UnitPrice $ amount{ acommodity="€" ,aquantity=0.5 - ,astyle=amountstyle{asprecision=1, asdecimalpoint=Just '.'} + ,astyle=amountstyle{asprecision=Precision 1, asdecimalpoint=Just '.'} } } ,test "total price" $ assertParseEq amountp "$10 @@ €5" amount{ acommodity="$" ,aquantity=10 - ,astyle=amountstyle{asprecision=0, asdecimalpoint=Nothing} + ,astyle=amountstyle{asprecision=Precision 0, asdecimalpoint=Nothing} ,aprice=Just $ TotalPrice $ amount{ acommodity="€" ,aquantity=5 - ,astyle=amountstyle{asprecision=0, asdecimalpoint=Nothing} + ,astyle=amountstyle{asprecision=Precision 0, asdecimalpoint=Nothing} } } ,test "unit price, parenthesised" $ assertParse amountp "$10 (@) €0.5" ,test "total price, parenthesised" $ assertParse amountp "$10 (@@) €0.5" ] - ,let p = lift (numberp Nothing) :: JournalParser IO (Quantity, Int, Maybe Char, Maybe DigitGroupStyle) in + ,let p = lift (numberp Nothing) :: JournalParser IO (Quantity, Word8, Maybe Char, Maybe DigitGroupStyle) in test "numberp" $ do assertParseEq p "0" (0, 0, Nothing, Nothing) assertParseEq p "1" (1, 0, Nothing, Nothing) @@ -1393,6 +1391,8 @@ assertParseError p "1..1" "" assertParseError p ".1," "" assertParseError p ",1." 
"" + assertParseEq p "1.555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555" (1.555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555, 255, Just '.', Nothing) + assertParseError p "1.5555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555" "" ,tests "spaceandamountormissingp" [ test "space and amount" $ assertParseEq spaceandamountormissingp " $47.18" (Mixed [usd 47.18]) diff -Nru haskell-hledger-lib-1.18.1/Hledger/Read/CsvReader.hs haskell-hledger-lib-1.19.1/Hledger/Read/CsvReader.hs --- haskell-hledger-lib-1.18.1/Hledger/Read/CsvReader.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Read/CsvReader.hs 2020-09-01 17:33:33.000000000 +0000 @@ -11,7 +11,6 @@ -- stack haddock hledger-lib --fast --no-haddock-deps --haddock-arguments='--ignore-all-exports' --open --- ** language -{-# LANGUAGE CPP #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiWayIf #-} @@ -42,41 +41,39 @@ --- ** imports import Prelude () import "base-compat-batteries" Prelude.Compat hiding (fail) -import qualified "base-compat-batteries" Control.Monad.Fail.Compat as Fail (fail) +import Control.Applicative (liftA2) import Control.Exception (IOException, handle, throw) import Control.Monad (liftM, unless, when) import Control.Monad.Except (ExceptT, throwError) +import qualified Control.Monad.Fail as Fail import Control.Monad.IO.Class 
(MonadIO, liftIO)
 import Control.Monad.State.Strict (StateT, get, modify', evalStateT)
 import Control.Monad.Trans.Class (lift)
-import Data.Char (toLower, isDigit, isSpace, ord)
+import Data.Char (toLower, isDigit, isSpace, isAlphaNum, isAscii, ord)
 import Data.Bifunctor (first)
 import "base-compat-batteries" Data.List.Compat
-import Data.Maybe
-import Data.Ord
+import qualified Data.List.Split as LS (splitOn)
+import Data.Maybe (catMaybes, fromMaybe, isJust)
+import Data.MemoUgly (memo)
+import Data.Ord (comparing)
 import qualified Data.Set as S
 import Data.Text (Text)
 import qualified Data.Text as T
 import qualified Data.Text.Encoding as T
 import qualified Data.Text.IO as T
 import Data.Time.Calendar (Day)
-#if MIN_VERSION_time(1,5,0)
 import Data.Time.Format (parseTimeM, defaultTimeLocale)
-#else
-import Data.Time.Format (parseTime)
-import System.Locale (defaultTimeLocale)
-#endif
-import Safe
+import Safe (atMay, headMay, lastMay, readDef, readMay)
 import System.Directory (doesFileExist)
-import System.FilePath
+import System.FilePath ((</>), takeDirectory, takeExtension, takeFileName)
 import qualified Data.Csv as Cassava
 import qualified Data.Csv.Parser.Megaparsec as CassavaMP
 import qualified Data.ByteString as B
 import qualified Data.ByteString.Lazy as BL
-import Data.Foldable
-import Text.Megaparsec hiding (parse)
-import Text.Megaparsec.Char
-import Text.Megaparsec.Custom
+import Data.Foldable (asum, toList)
+import Text.Megaparsec hiding (match, parse)
+import Text.Megaparsec.Char (char, newline, string)
+import Text.Megaparsec.Custom (customErrorBundlePretty, parseErrorAt)
 import Text.Printf (printf)
 import Hledger.Data
@@ -100,7 +97,7 @@
   {rFormat     = "csv"
   ,rExtensions = ["csv","tsv","ssv"]
   ,rReadFn     = parse
-  ,rParser    = error' "sorry, CSV files can't be included yet"
+  ,rParser    = error' "sorry, CSV files can't be included yet"  -- PARTIAL:
   }

-- | Parse and post-process a "Journal" from CSV data, or give an error.
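The reader above registers the "csv", "tsv" and "ssv" extensions, and a hunk later in this diff starts inferring the field separator from that extension when no `separator` directive is present. A dependency-free sketch of that inference — `separatorFor` and `fileExtension` are illustrative names, not hledger functions:

```haskell
import Data.Char (toLower)

-- Simplified stand-in for System.FilePath.takeExtension:
-- the part after the last dot, or "" when there is no dot.
fileExtension :: FilePath -> String
fileExtension p = case break (== '.') (reverse p) of
  (revExt, '.':_) -> reverse revExt
  _               -> ""

-- An explicit "separator" directive wins; otherwise the file
-- extension decides, defaulting to comma.
separatorFor :: Maybe Char -> FilePath -> Char
separatorFor (Just c) _    = c
separatorFor Nothing  path = case map toLower (fileExtension path) of
  "ssv" -> ';'
  "tsv" -> '\t'
  _     -> ','
```

This mirrors the case expression added in the reading code further down: an explicit directive value, then `';'` for "ssv", `'\t'` for "tsv", and `','` otherwise.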
@@ -164,28 +161,31 @@ ," account2 assets:bank:savings\n" ] -addDirective :: (DirectiveName, String) -> CsvRules -> CsvRules +addDirective :: (DirectiveName, String) -> CsvRulesParsed -> CsvRulesParsed addDirective d r = r{rdirectives=d:rdirectives r} -addAssignment :: (HledgerFieldName, FieldTemplate) -> CsvRules -> CsvRules +addAssignment :: (HledgerFieldName, FieldTemplate) -> CsvRulesParsed -> CsvRulesParsed addAssignment a r = r{rassignments=a:rassignments r} -setIndexesAndAssignmentsFromList :: [CsvFieldName] -> CsvRules -> CsvRules +setIndexesAndAssignmentsFromList :: [CsvFieldName] -> CsvRulesParsed -> CsvRulesParsed setIndexesAndAssignmentsFromList fs r = addAssignmentsFromList fs . setCsvFieldIndexesFromList fs $ r -setCsvFieldIndexesFromList :: [CsvFieldName] -> CsvRules -> CsvRules +setCsvFieldIndexesFromList :: [CsvFieldName] -> CsvRulesParsed -> CsvRulesParsed setCsvFieldIndexesFromList fs r = r{rcsvfieldindexes=zip fs [1..]} -addAssignmentsFromList :: [CsvFieldName] -> CsvRules -> CsvRules +addAssignmentsFromList :: [CsvFieldName] -> CsvRulesParsed -> CsvRulesParsed addAssignmentsFromList fs r = foldl' maybeAddAssignment r journalfieldnames where maybeAddAssignment rules f = (maybe id addAssignmentFromIndex $ elemIndex f fs) rules where addAssignmentFromIndex i = addAssignment (f, "%"++show (i+1)) -addConditionalBlock :: ConditionalBlock -> CsvRules -> CsvRules +addConditionalBlock :: ConditionalBlock -> CsvRulesParsed -> CsvRulesParsed addConditionalBlock b r = r{rconditionalblocks=b:rconditionalblocks r} +addConditionalBlocks :: [ConditionalBlock] -> CsvRulesParsed -> CsvRulesParsed +addConditionalBlocks bs r = r{rconditionalblocks=bs++rconditionalblocks r} + getDirective :: DirectiveName -> CsvRules -> Maybe FieldTemplate getDirective directivename = lookup directivename . 
rdirectives
@@ -236,18 +236,42 @@
 -- | A set of data definitions and account-matching patterns sufficient to
 -- convert a particular CSV data file into meaningful journal transactions.
-data CsvRules = CsvRules {
+data CsvRules' a = CsvRules' {
   rdirectives :: [(DirectiveName,String)],
     -- ^ top-level rules, as (keyword, value) pairs
   rcsvfieldindexes :: [(CsvFieldName, CsvFieldIndex)],
     -- ^ csv field names and their column number, if declared by a fields list
   rassignments :: [(HledgerFieldName, FieldTemplate)],
     -- ^ top-level assignments to hledger fields, as (field name, value template) pairs
- rconditionalblocks :: [ConditionalBlock]
+ rconditionalblocks :: [ConditionalBlock],
     -- ^ conditional blocks, which contain additional assignments/rules to apply to matched csv records
-} deriving (Show, Eq)
+ rblocksassigning :: a -- (String -> [ConditionalBlock])
     -- ^ all conditional blocks which can potentially assign a field with a given name (memoized)
+}
+
+-- | Type used by parsers. Directives, assignments and conditional blocks
+-- are in the reverse order compared to what is in the file, and rblocksassigning
+-- is non-functional, so it cannot be used for processing CSV records yet.
+type CsvRulesParsed = CsvRules' ()
+
+-- | Type used after parsing is done. Directives, assignments and conditional blocks
+-- are in the same order as they were in the input file and rblocksassigning is functional.
+-- Ready to be used for CSV record processing +type CsvRules = CsvRules' (String -> [ConditionalBlock]) + +instance Eq CsvRules where + r1 == r2 = (rdirectives r1, rcsvfieldindexes r1, rassignments r1) == + (rdirectives r2, rcsvfieldindexes r2, rassignments r2) + +-- It is used for debug output only +instance Show CsvRules where + show r = "CsvRules { rdirectives=" ++ show (rdirectives r) ++ + ", rcsvfieldindexes=" ++ show (rcsvfieldindexes r) ++ + ", rassignments=" ++ show (rassignments r) ++ + ", rconditionalblocks="++ show (rconditionalblocks r) ++ + " }" -type CsvRulesParser a = StateT CsvRules SimpleTextParser a +type CsvRulesParser a = StateT CsvRulesParsed SimpleTextParser a -- | The keyword of a CSV rule - "fields", "skip", "if", etc. type DirectiveName = String @@ -272,13 +296,14 @@ -- | A strptime date parsing pattern, as supported by Data.Time.Format. type DateFormat = String --- | A regular expression. -type RegexpPattern = String +-- | A prefix for a matcher test, either & or none (implicit or). +data MatcherPrefix = And | None + deriving (Show, Eq) -- | A single test for matching a CSV record, in one way or another. 
data Matcher = - RecordMatcher RegexpPattern -- ^ match if this regexp matches the overall CSV record - | FieldMatcher CsvFieldReference RegexpPattern -- ^ match if this regexp matches the referenced CSV field's value + RecordMatcher MatcherPrefix Regexp -- ^ match if this regexp matches the overall CSV record + | FieldMatcher MatcherPrefix CsvFieldReference Regexp -- ^ match if this regexp matches the referenced CSV field's value deriving (Show, Eq) -- | A conditional block: a set of CSV record matchers, and a sequence @@ -293,12 +318,44 @@ ,cbAssignments :: [(HledgerFieldName, FieldTemplate)] } deriving (Show, Eq) -defrules = CsvRules { +defrules :: CsvRulesParsed +defrules = CsvRules' { rdirectives=[], rcsvfieldindexes=[], rassignments=[], - rconditionalblocks=[] -} + rconditionalblocks=[], + rblocksassigning = () + } + +-- | Create CsvRules from the content parsed out of the rules file +mkrules :: CsvRulesParsed -> CsvRules +mkrules rules = + let conditionalblocks = reverse $ rconditionalblocks rules + maybeMemo = if length conditionalblocks >= 15 then memo else id + in + CsvRules' { + rdirectives=reverse $ rdirectives rules, + rcsvfieldindexes=rcsvfieldindexes rules, + rassignments=reverse $ rassignments rules, + rconditionalblocks=conditionalblocks, + rblocksassigning = maybeMemo (\f -> filter (any ((==f).fst) . 
cbAssignments) conditionalblocks)
+  }
+
+matcherPrefix :: Matcher -> MatcherPrefix
+matcherPrefix (RecordMatcher prefix _) = prefix
+matcherPrefix (FieldMatcher prefix _ _) = prefix
+
+-- | Group matchers into AND-connected groups based on prefix, e.g.:
+-- A
+-- & B
+-- C
+-- D
+-- & E
+-- => [[A, B], [C], [D, E]]
+groupedMatchers :: [Matcher] -> [[Matcher]]
+groupedMatchers [] = []
+groupedMatchers (x:xs) = (x:ys) : groupedMatchers zs
+  where (ys, zs) = span (\y -> matcherPrefix y == And) xs

--- *** rules parsers
@@ -367,28 +424,28 @@
 rulesp :: CsvRulesParser CsvRules
 rulesp = do
-  _ <- many $ choiceInState
+  _ <- many $ choice
     [blankorcommentlinep <?> "blank or comment line"
     ,(directivep >>= modify' . addDirective) <?> "directive"
     ,(fieldnamelistp >>= modify' . setIndexesAndAssignmentsFromList) <?> "field name list"
     ,(fieldassignmentp >>= modify' . addAssignment) <?> "field assignment"
-    ,(conditionalblockp >>= modify' . addConditionalBlock) <?> "conditional block"
+    -- conditionalblockp backtracks because it shares "if" prefix with conditionaltablep.
+    ,try (conditionalblockp >>= modify' . addConditionalBlock) <?> "conditional block"
+    -- 'reverse' is there to ensure that conditions are added in the order they are listed in the file
+    ,(conditionaltablep >>= modify' . addConditionalBlocks .
reverse) <?> "conditional table"
     ]
   eof
   r <- get
-  return r{rdirectives=reverse $ rdirectives r
           ,rassignments=reverse $ rassignments r
           ,rconditionalblocks=reverse $ rconditionalblocks r
           }
+  return $ mkrules r

 blankorcommentlinep :: CsvRulesParser ()
 blankorcommentlinep = lift (dbgparse 8 "trying blankorcommentlinep") >> choiceInState [blanklinep, commentlinep]

 blanklinep :: CsvRulesParser ()
-blanklinep = lift (skipMany spacenonewline) >> newline >> return () <?> "blank line"
+blanklinep = lift skipNonNewlineSpaces >> newline >> return () <?> "blank line"

 commentlinep :: CsvRulesParser ()
-commentlinep = lift (skipMany spacenonewline) >> commentcharp >> lift restofline >> return () <?> "comment line"
+commentlinep = lift skipNonNewlineSpaces >> commentcharp >> lift restofline >> return () <?> "comment line"

 commentcharp :: CsvRulesParser Char
 commentcharp = oneOf (";#*" :: [Char])
@@ -398,7 +455,7 @@
     lift $ dbgparse 8 "trying directive"
     d <- fmap T.unpack $ choiceInState $ map (lift . string . T.pack) directives
     v <- (((char ':' >> lift (many spacenonewline)) <|> lift (some spacenonewline)) >> directivevalp)
-         <|> (optional (char ':') >> lift (skipMany spacenonewline) >> lift eolof >> return "")
+         <|> (optional (char ':') >> lift skipNonNewlineSpaces >> lift eolof >> return "")
     return (d, v)
   ) <?> "directive"
@@ -421,8 +478,8 @@
   lift $ dbgparse 8 "trying fieldnamelist"
   string "fields"
   optional $ char ':'
-  lift (skipSome spacenonewline)
-  let separator = lift (skipMany spacenonewline) >> char ',' >> lift (skipMany spacenonewline)
+  lift skipNonNewlineSpaces1
+  let separator = lift skipNonNewlineSpaces >> char ',' >> lift skipNonNewlineSpaces
   f <- fromMaybe "" <$> optional fieldnamep
   fs <- some $ (separator >> fromMaybe "" <$> optional fieldnamep)
   lift restofline
@@ -490,8 +547,8 @@
 assignmentseparatorp :: CsvRulesParser ()
 assignmentseparatorp = do
   lift $ dbgparse 8 "trying assignmentseparatorp"
-  _ <- choiceInState [ lift (skipMany spacenonewline) >> char ':' >> lift (skipMany
spacenonewline)
-                     , lift (skipSome spacenonewline)
+  _ <- choiceInState [ lift skipNonNewlineSpaces >> char ':' >> lift skipNonNewlineSpaces
+                     , lift skipNonNewlineSpaces1
                      ]
   return ()
@@ -504,51 +561,92 @@
 conditionalblockp :: CsvRulesParser ConditionalBlock
 conditionalblockp = do
   lift $ dbgparse 8 "trying conditionalblockp"
-  string "if" >> lift (skipMany spacenonewline) >> optional newline
+  -- "if\nMATCHER" or "if \nMATCHER" or "if MATCHER"
+  start <- getOffset
+  string "if" >> ( (newline >> return Nothing)
+                   <|> (lift skipNonNewlineSpaces1 >> optional newline))
   ms <- some matcherp
-  as <- many (try $ lift (skipSome spacenonewline) >> fieldassignmentp)
+  as <- catMaybes <$>
+    many (lift skipNonNewlineSpaces1 >>
+          choice [ lift eolof >> return Nothing
+                 , fmap Just fieldassignmentp
+                 ])
   when (null as) $
-    Fail.fail "start of conditional block found, but no assignment rules afterward\n(assignment rules in a conditional block should be indented)\n"
+    customFailure $ parseErrorAt start $ "start of conditional block found, but no assignment rules afterward\n(assignment rules in a conditional block should be indented)\n"
   return $ CB{cbMatchers=ms, cbAssignments=as}
   <?> "conditional block"

+-- A conditional table: "if" followed by separator, followed by some field names,
+-- followed by many lines, each of which has:
+-- one matcher, followed by field assignments (as many as there were fields)
+conditionaltablep :: CsvRulesParser [ConditionalBlock]
+conditionaltablep = do
+  lift $ dbgparse 8 "trying conditionaltablep"
+  start <- getOffset
+  string "if"
+  sep <- lift $ satisfy (\c -> not (isAlphaNum c || isSpace c))
+  fields <- journalfieldnamep `sepBy1` (char sep)
+  newline
+  body <- flip manyTill (lift eolof) $ do
+    off <- getOffset
+    m <- matcherp' (char sep >> return ())
+    vs <- LS.splitOn [sep] <$> lift restofline
+    if (length vs /= length fields)
+      then customFailure $ parseErrorAt off $ ((printf "line of conditional table should have %d values, but this one has
%d\n" (length fields) (length vs)) :: String)
+      else return (m,vs)
+  when (null body) $
+    customFailure $ parseErrorAt start $ "start of conditional table found, but no assignment rules afterward\n"
+  return $ flip map body $ \(m,vs) ->
+    CB{cbMatchers=[m], cbAssignments=zip fields vs}
+  <?> "conditional table"
+
 -- A single matcher, on one line.
+matcherp' :: CsvRulesParser () -> CsvRulesParser Matcher
+matcherp' end = try (fieldmatcherp end) <|> recordmatcherp end
+
 matcherp :: CsvRulesParser Matcher
-matcherp = try fieldmatcherp <|> recordmatcherp
+matcherp = matcherp' (lift eolof)

 -- A single whole-record matcher.
 -- A pattern on the whole line, not beginning with a csv field reference.
-recordmatcherp :: CsvRulesParser Matcher
-recordmatcherp = do
-  lift $ dbgparse 8 "trying matcherp"
+recordmatcherp :: CsvRulesParser () -> CsvRulesParser Matcher
+recordmatcherp end = do
+  lift $ dbgparse 8 "trying recordmatcherp"
   -- pos <- currentPos
-  -- _ <- optional (matchoperatorp >> lift (skipMany spacenonewline) >> optional newline)
-  r <- regexp
+  -- _ <- optional (matchoperatorp >> lift skipNonNewlineSpaces >> optional newline)
+  p <- matcherprefixp
+  r <- regexp end
+  return $ RecordMatcher p r
   -- when (null ps) $
   --   Fail.fail "start of record matcher found, but no patterns afterward\n(patterns should not be indented)\n"
-  return $ RecordMatcher r
   <?> "record matcher"

 -- | A single matcher for a specific field. A csv field reference
 -- (like %date or %1), and a pattern on the rest of the line,
 -- optionally space-separated.
Eg:
 -- %description chez jacques
-fieldmatcherp :: CsvRulesParser Matcher
-fieldmatcherp = do
+fieldmatcherp :: CsvRulesParser () -> CsvRulesParser Matcher
+fieldmatcherp end = do
   lift $ dbgparse 8 "trying fieldmatcher"
   -- An optional fieldname (default: "all")
   -- f <- fromMaybe "all" `fmap` (optional $ do
   --        f' <- fieldnamep
-  --        lift (skipMany spacenonewline)
+  --        lift skipNonNewlineSpaces
   --        return f')
-  f <- csvfieldreferencep <* lift (skipMany spacenonewline)
+  p <- matcherprefixp
+  f <- csvfieldreferencep <* lift skipNonNewlineSpaces
   -- optional operator.. just ~ (case insensitive infix regex) for now
   -- _op <- fromMaybe "~" <$> optional matchoperatorp
-  lift (skipMany spacenonewline)
-  r <- regexp
-  return $ FieldMatcher f r
+  lift skipNonNewlineSpaces
+  r <- regexp end
+  return $ FieldMatcher p f r
   <?> "field matcher"

+matcherprefixp :: CsvRulesParser MatcherPrefix
+matcherprefixp = do
+  lift $ dbgparse 8 "trying matcherprefixp"
+  (char '&' >> lift skipNonNewlineSpaces >> return And) <|> return None
+
 csvfieldreferencep :: CsvRulesParser CsvFieldReference
 csvfieldreferencep = do
   lift $ dbgparse 8 "trying csvfieldreferencep"
@@ -557,13 +655,15 @@
   return $ '%' : quoteIfNeeded f

 -- A single regular expression
-regexp :: CsvRulesParser RegexpPattern
-regexp = do
+regexp :: CsvRulesParser () -> CsvRulesParser Regexp
+regexp end = do
   lift $ dbgparse 8 "trying regexp"
   -- notFollowedBy matchoperatorp
   c <- lift nonspace
-  cs <- anySingle `manyTill` lift eolof
-  return $ strip $ c:cs
+  cs <- anySingle `manyTill` end
+  case toRegexCI . strip $ c:cs of
+    Left x -> Fail.fail $ "CSV parser: " ++ x
+    Right x -> return x

 -- -- A match operator, indicating the type of match to perform.
 -- -- Currently just ~ meaning case insensitive infix regex match.
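matcherprefixp feeds the new OR-of-ANDs matching semantics introduced in this diff: an unprefixed matcher starts a new group, a `&`-prefixed matcher joins the preceding group, groups are ORed together and members within a group are ANDed (see groupedMatchers and isBlockActive elsewhere in the diff). A standalone sketch of that grouping over prefix/label pairs rather than the full Matcher type:

```haskell
-- Standalone sketch of the &-prefix grouping; names shadow the diff's
-- but this is an illustration, not the hledger module.
data MatcherPrefix = And | None deriving (Eq, Show)

-- An unprefixed matcher opens a group; following And matchers join it.
groupedMatchers :: [(MatcherPrefix, a)] -> [[(MatcherPrefix, a)]]
groupedMatchers [] = []
groupedMatchers (x:xs) = (x:ys) : groupedMatchers zs
  where (ys, zs) = span ((== And) . fst) xs

-- OR of ANDs, mirroring isBlockActive's `any (all matcherMatches)`.
anyGroupMatches :: (a -> Bool) -> [(MatcherPrefix, a)] -> Bool
anyGroupMatches p = any (all (p . snd)) . groupedMatchers
```

With matchers A, &B, C, D, &E this yields the groups [[A,B],[C],[D,E]] from the groupedMatchers doc comment: the block fires if (A and B) or C or (D and E) matches.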
@@ -606,12 +706,12 @@ rulestext <- if rulesfileexists then do - dbg7IO "using conversion rules file" rulesfile + dbg6IO "using conversion rules file" rulesfile readFilePortably rulesfile >>= expandIncludes (takeDirectory rulesfile) else return $ defaultRulesText rulesfile rules <- either throwerr return $ parseAndValidateCsvRules rulesfile rulestext - dbg7IO "rules" rules + dbg6IO "rules" rules -- parse the skip directive's value, if any let skiplines = case getDirective "skip" rules of @@ -620,15 +720,23 @@ Just s -> readDef (throwerr $ "could not parse skip value: " ++ show s) s -- parse csv - -- parsec seems to fail if you pass it "-" here TODO: try again with megaparsec - let parsecfilename = if csvfile == "-" then "(stdin)" else csvfile - let separator = fromMaybe ',' (getDirective "separator" rules >>= parseSeparator) - dbg7IO "separator" separator + let + -- parsec seems to fail if you pass it "-" here TODO: try again with megaparsec + parsecfilename = if csvfile == "-" then "(stdin)" else csvfile + separator = + case getDirective "separator" rules >>= parseSeparator of + Just c -> c + _ | ext == "ssv" -> ';' + _ | ext == "tsv" -> '\t' + _ -> ',' + where + ext = map toLower $ drop 1 $ takeExtension csvfile + dbg6IO "using separator" separator records <- (either throwerr id . - dbg8 "validateCsv" . validateCsv rules skiplines . - dbg8 "parseCsv") + dbg7 "validateCsv" . validateCsv rules skiplines . 
+ dbg7 "parseCsv") `fmap` parseCsv separator parsecfilename csvdata - dbg7IO "first 3 csv records" $ take 3 records + dbg6IO "first 3 csv records" $ take 3 records -- identify header lines -- let (headerlines, datalines) = identifyHeaderLines records @@ -655,8 +763,8 @@ txns' = (if newestfirst || mdataseemsnewestfirst == Just True then reverse else id) txns where - newestfirst = dbg7 "newestfirst" $ isJust $ getDirective "newest-first" rules - mdataseemsnewestfirst = dbg7 "mdataseemsnewestfirst" $ + newestfirst = dbg6 "newestfirst" $ isJust $ getDirective "newest-first" rules + mdataseemsnewestfirst = dbg6 "mdataseemsnewestfirst" $ case nub $ map tdate txns of ds | length ds > 1 -> Just $ head ds > last ds _ -> Nothing @@ -727,10 +835,9 @@ Nothing -> r:(applyConditionalSkips rest) Just cnt -> applyConditionalSkips (drop (cnt-1) rest) validate [] = Right [] - validate rs@(_first:_) - | isJust lessthan2 = let r = fromJust lessthan2 in - Left $ printf "CSV record %s has less than two fields" (show r) - | otherwise = Right rs + validate rs@(_first:_) = case lessthan2 of + Just r -> Left $ printf "CSV record %s has less than two fields" (show r) + Nothing -> Right rs where lessthan2 = headMay $ filter ((<2).length) rs @@ -780,7 +887,7 @@ -- ruleval = csvRuleValue rules record :: DirectiveName -> Maybe String field = hledgerField rules record :: HledgerFieldName -> Maybe FieldTemplate fieldval = hledgerFieldValue rules record :: HledgerFieldName -> Maybe String - parsedate' = parseDateWithCustomOrDefaultFormats (rule "date-format") + parsedate = parseDateWithCustomOrDefaultFormats (rule "date-format") mkdateerror datefield datevalue mdateformat = unlines ["error: could not parse \""++datevalue++"\" as a date using date format " ++maybe "\"YYYY/M/D\", \"YYYY-M-D\" or \"YYYY.M.D\"" show mdateformat @@ -803,9 +910,10 @@ mdateformat = rule "date-format" date = fromMaybe "" $ fieldval "date" - date' = fromMaybe (error' $ mkdateerror "date" date mdateformat) $ parsedate' date 
+ -- PARTIAL: + date' = fromMaybe (error' $ mkdateerror "date" date mdateformat) $ parsedate date mdate2 = fieldval "date2" - mdate2' = maybe Nothing (maybe (error' $ mkdateerror "date2" (fromMaybe "" mdate2) mdateformat) Just . parsedate') mdate2 + mdate2' = maybe Nothing (maybe (error' $ mkdateerror "date2" (fromMaybe "" mdate2) mdateformat) Just . parsedate) mdate2 status = case fieldval "status" of Nothing -> Unmarked @@ -905,7 +1013,7 @@ [] -> Nothing [(f,a)] | "-out" `isSuffixOf` f -> Just (-a) -- for -out fields, flip the sign [(_,a)] -> Just a - fs -> error' $ unlines $ [ + fs -> error' $ unlines $ [ -- PARTIAL: "multiple non-zero amounts or multiple zero amounts assigned," ,"please ensure just one. (https://hledger.org/csv.html#amount)" ," " ++ showRecord record @@ -923,7 +1031,7 @@ -- The CSV rules and record are provided for the error message. parseAmount :: CsvRules -> CsvRecord -> String -> String -> MixedAmount parseAmount rules record currency amountstr = - either mkerror (Mixed . (:[])) $ + either mkerror (Mixed . (:[])) $ -- PARTIAL: runParser (evalStateT (amountp <* eof) nulljournal) "" $ T.pack $ (currency++) $ simplifySign amountstr where @@ -981,7 +1089,7 @@ Just "==" -> nullassertion{batotal=True} Just "=*" -> nullassertion{bainclusive=True} Just "==*" -> nullassertion{batotal=True, bainclusive=True} - Just x -> error' $ unlines + Just x -> error' $ unlines -- PARTIAL: [ "balance-type \"" ++ x ++"\" is invalid. Use =, ==, =* or ==*." 
, showRecord record , showRules rules record @@ -1060,40 +1168,44 @@ getEffectiveAssignment rules record f = lastMay $ map snd $ assignments where -- all active assignments to field f, in order - assignments = dbg8 "assignments" $ filter ((==f).fst) $ toplevelassignments ++ conditionalassignments + assignments = dbg7 "assignments" $ filter ((==f).fst) $ toplevelassignments ++ conditionalassignments where -- all top level field assignments toplevelassignments = rassignments rules -- all field assignments in conditional blocks assigning to field f and active for the current csv record - conditionalassignments = concatMap cbAssignments $ filter isBlockActive $ blocksAssigning f + conditionalassignments = concatMap cbAssignments $ filter isBlockActive $ (rblocksassigning rules) f where - -- all conditional blocks which can potentially assign field f - blocksAssigning f = filter (any ((==f).fst) . cbAssignments) $ rconditionalblocks rules -- does this conditional block match the current csv record ? isBlockActive :: ConditionalBlock -> Bool - isBlockActive CB{..} = any matcherMatches cbMatchers + isBlockActive CB{..} = any (all matcherMatches) $ groupedMatchers cbMatchers where -- does this individual matcher match the current csv record ? matcherMatches :: Matcher -> Bool - matcherMatches (RecordMatcher pat) = regexMatchesCI pat' wholecsvline + matcherMatches (RecordMatcher _ pat) = regexMatch pat' wholecsvline where - pat' = dbg8 "regex" pat + pat' = dbg7 "regex" pat -- A synthetic whole CSV record to match against. Note, this can be -- different from the original CSV data: -- - any whitespace surrounding field values is preserved -- - any quotes enclosing field values are removed -- - and the field separator is always comma -- which means that a field containing a comma will look like two fields. 
- wholecsvline = dbg8 "wholecsvline" $ intercalate "," record - matcherMatches (FieldMatcher csvfieldref pat) = regexMatchesCI pat csvfieldvalue + wholecsvline = dbg7 "wholecsvline" $ intercalate "," record + matcherMatches (FieldMatcher _ csvfieldref pat) = regexMatch pat csvfieldvalue where -- the value of the referenced CSV field to match against. - csvfieldvalue = dbg8 "csvfieldvalue" $ replaceCsvFieldReference rules record csvfieldref + csvfieldvalue = dbg7 "csvfieldvalue" $ replaceCsvFieldReference rules record csvfieldref -- | Render a field assignment's template, possibly interpolating referenced -- CSV field values. Outer whitespace is removed from interpolated values. renderTemplate :: CsvRules -> CsvRecord -> FieldTemplate -> String -renderTemplate rules record t = regexReplaceBy "%[A-z0-9_-]+" (replaceCsvFieldReference rules record) t +renderTemplate rules record t = maybe t concat $ parseMaybe + (many $ takeWhile1P Nothing (/='%') + <|> replaceCsvFieldReference rules record <$> referencep) + t + where + referencep = liftA2 (:) (char '%') (takeWhile1P (Just "reference") isDescriptorChar) :: Parsec CustomErr String String + isDescriptorChar c = isAscii c && (isAlphaNum c || c == '_' || c == '-') -- | Replace something that looks like a reference to a csv field ("%date" or "%1) -- with that field's value. 
If it doesn't look like a field reference, or if we @@ -1117,13 +1229,7 @@ parseDateWithCustomOrDefaultFormats :: Maybe DateFormat -> String -> Maybe Day parseDateWithCustomOrDefaultFormats mformat s = asum $ map parsewith formats where - parsetime = -#if MIN_VERSION_time(1,5,0) - parseTimeM True -#else - parseTime -#endif - parsewith = flip (parsetime defaultTimeLocale) s + parsewith = flip (parseTimeM True defaultTimeLocale) s formats = maybe ["%Y/%-m/%-d" ,"%Y-%-m-%-d" @@ -1142,26 +1248,26 @@ tests_CsvReader = tests "CsvReader" [ tests "parseCsvRules" [ test "empty file" $ - parseCsvRules "unknown" "" @?= Right defrules + parseCsvRules "unknown" "" @?= Right (mkrules defrules) ] ,tests "rulesp" [ test "trailing comments" $ - parseWithState' defrules rulesp "skip\n# \n#\n" @?= Right defrules{rdirectives = [("skip","")]} + parseWithState' defrules rulesp "skip\n# \n#\n" @?= Right (mkrules $ defrules{rdirectives = [("skip","")]}) ,test "trailing blank lines" $ - parseWithState' defrules rulesp "skip\n\n \n" @?= (Right defrules{rdirectives = [("skip","")]}) + parseWithState' defrules rulesp "skip\n\n \n" @?= (Right (mkrules $ defrules{rdirectives = [("skip","")]})) ,test "no final newline" $ - parseWithState' defrules rulesp "skip" @?= (Right defrules{rdirectives=[("skip","")]}) + parseWithState' defrules rulesp "skip" @?= (Right (mkrules $ defrules{rdirectives=[("skip","")]})) ,test "assignment with empty value" $ parseWithState' defrules rulesp "account1 \nif foo\n account2 foo\n" @?= - (Right defrules{rassignments = [("account1","")], rconditionalblocks = [CB{cbMatchers=[RecordMatcher "foo"],cbAssignments=[("account2","foo")]}]}) + (Right (mkrules $ defrules{rassignments = [("account1","")], rconditionalblocks = [CB{cbMatchers=[RecordMatcher None (toRegex' "foo")],cbAssignments=[("account2","foo")]}]})) ] ,tests "conditionalblockp" [ test "space after conditional" $ -- #1120 parseWithState' defrules conditionalblockp "if a\n account2 b\n \n" @?= - (Right $ 
CB{cbMatchers=[RecordMatcher "a"],cbAssignments=[("account2","b")]}) + (Right $ CB{cbMatchers=[RecordMatcher None $ toRegexCI' "a"],cbAssignments=[("account2","b")]}) ,tests "csvfieldreferencep" [ test "number" $ parseWithState' defrules csvfieldreferencep "%1" @?= (Right "%1") @@ -1172,13 +1278,19 @@ ,tests "matcherp" [ test "recordmatcherp" $ - parseWithState' defrules matcherp "A A\n" @?= (Right $ RecordMatcher "A A") + parseWithState' defrules matcherp "A A\n" @?= (Right $ RecordMatcher None $ toRegexCI' "A A") + + ,test "recordmatcherp.starts-with-&" $ + parseWithState' defrules matcherp "& A A\n" @?= (Right $ RecordMatcher And $ toRegexCI' "A A") ,test "fieldmatcherp.starts-with-%" $ - parseWithState' defrules matcherp "description A A\n" @?= (Right $ RecordMatcher "description A A") + parseWithState' defrules matcherp "description A A\n" @?= (Right $ RecordMatcher None $ toRegexCI' "description A A") ,test "fieldmatcherp" $ - parseWithState' defrules matcherp "%description A A\n" @?= (Right $ FieldMatcher "%description" "A A") + parseWithState' defrules matcherp "%description A A\n" @?= (Right $ FieldMatcher None "%description" $ toRegexCI' "A A") + + ,test "fieldmatcherp.starts-with-&" $ + parseWithState' defrules matcherp "& %description A A\n" @?= (Right $ FieldMatcher And "%description" $ toRegexCI' "A A") -- ,test "fieldmatcherp with operator" $ -- parseWithState' defrules matcherp "%description ~ A A\n" @?= (Right $ FieldMatcher "%description" "A A") @@ -1186,13 +1298,25 @@ ] ,tests "getEffectiveAssignment" [ - let rules = defrules{rcsvfieldindexes=[("csvdate",1)],rassignments=[("date","%csvdate")]} - + let rules = mkrules $ defrules {rcsvfieldindexes=[("csvdate",1)],rassignments=[("date","%csvdate")]} + in test "toplevel" $ getEffectiveAssignment rules ["a","b"] "date" @?= (Just "%csvdate") - ,let rules = defrules{rcsvfieldindexes=[("csvdate",1)], rconditionalblocks=[CB [FieldMatcher "%csvdate" "a"] [("date","%csvdate")]]} + ,let rules = mkrules $ 
defrules{rcsvfieldindexes=[("csvdate",1)], rconditionalblocks=[CB [FieldMatcher None "%csvdate" $ toRegex' "a"] [("date","%csvdate")]]} in test "conditional" $ getEffectiveAssignment rules ["a","b"] "date" @?= (Just "%csvdate") - + + ,let rules = mkrules $ defrules{rcsvfieldindexes=[("csvdate",1),("description",2)], rconditionalblocks=[CB [FieldMatcher None "%csvdate" $ toRegex' "a", FieldMatcher None "%description" $ toRegex' "b"] [("date","%csvdate")]]} + in test "conditional-with-or-a" $ getEffectiveAssignment rules ["a"] "date" @?= (Just "%csvdate") + + ,let rules = mkrules $ defrules{rcsvfieldindexes=[("csvdate",1),("description",2)], rconditionalblocks=[CB [FieldMatcher None "%csvdate" $ toRegex' "a", FieldMatcher None "%description" $ toRegex' "b"] [("date","%csvdate")]]} + in test "conditional-with-or-b" $ getEffectiveAssignment rules ["_", "b"] "date" @?= (Just "%csvdate") + + ,let rules = mkrules $ defrules{rcsvfieldindexes=[("csvdate",1),("description",2)], rconditionalblocks=[CB [FieldMatcher None "%csvdate" $ toRegex' "a", FieldMatcher And "%description" $ toRegex' "b"] [("date","%csvdate")]]} + in test "conditional.with-and" $ getEffectiveAssignment rules ["a", "b"] "date" @?= (Just "%csvdate") + + ,let rules = mkrules $ defrules{rcsvfieldindexes=[("csvdate",1),("description",2)], rconditionalblocks=[CB [FieldMatcher None "%csvdate" $ toRegex' "a", FieldMatcher And "%description" $ toRegex' "b", FieldMatcher None "%description" $ toRegex' "c"] [("date","%csvdate")]]} + in test "conditional.with-and-or" $ getEffectiveAssignment rules ["_", "c"] "date" @?= (Just "%csvdate") + ] ] diff -Nru haskell-hledger-lib-1.18.1/Hledger/Read/JournalReader.hs haskell-hledger-lib-1.19.1/Hledger/Read/JournalReader.hs --- haskell-hledger-lib-1.18.1/Hledger/Read/JournalReader.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Read/JournalReader.hs 2020-09-02 03:10:45.000000000 +0000 @@ -247,7 +247,7 @@ includedirectivep :: MonadIO m => 
ErroringJournalParser m () includedirectivep = do string "include" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 prefixedglob <- T.unpack <$> takeWhileP Nothing (/= '\n') -- don't consume newline yet parentoff <- getOffset parentpos <- getSourcePos @@ -296,7 +296,7 @@ -- on journal. Duplicating readJournal a bit here. let r = fromMaybe reader $ findReader Nothing (Just prefixedpath) parser = rParser r - dbg7IO "trying reader" (rFormat r) + dbg6IO "trying reader" (rFormat r) updatedChildj <- journalAddFile (filepath, childInput) <$> parseIncludeFile parser initChildj filepath childInput @@ -331,7 +331,7 @@ off <- getOffset -- XXX figure out a more precise position later string "account" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 -- the account name, possibly modified by preceding alias or apply account directives acct <- modifiedaccountnamep @@ -339,7 +339,7 @@ -- maybe an account type code (ALERX) after two or more spaces -- XXX added in 1.11, deprecated in 1.13, remove in 1.14 mtypecode :: Maybe Char <- lift $ optional $ try $ do - skipSome spacenonewline -- at least one more space in addition to the one consumed by modifiedaccountp + skipNonNewlineSpaces1 -- at least one more space in addition to the one consumed by modifiedaccountp choice $ map char "ALERX" -- maybe a comment, on this and/or following lines @@ -380,10 +380,12 @@ "r" -> Right Revenue "expense" -> Right Expense "x" -> Right Expense + "cash" -> Right Cash + "c" -> Right Cash _ -> Left err where err = "invalid account type code "++T.unpack s++", should be one of " ++ - (intercalate ", " $ ["A","L","E","R","X","ASSET","LIABILITY","EQUITY","REVENUE","EXPENSE"]) + (intercalate ", " $ ["A","L","E","R","X","C","Asset","Liability","Equity","Revenue","Expense","Cash"]) -- Add an account declaration to the journal, auto-numbering it. 
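The mechanical `skipSome spacenonewline` → `skipNonNewlineSpaces1` (and `skipMany` → `skipNonNewlineSpaces`) substitutions throughout these hunks suggest helpers along the following lines. This is a hedged sketch assuming megaparsec's `takeWhileP`/`takeWhile1P`; the real definitions live in hledger's utils modules and are polymorphic over the parser monad, while `P` here is a stand-in concrete parser type.

```haskell
import Data.Char (isSpace)
import Data.Void (Void)
import Text.Megaparsec (Parsec, parse, takeRest, takeWhileP, takeWhile1P)

-- Stand-in concrete parser type for this sketch.
type P = Parsec Void String

-- True for whitespace other than line endings (spaces, tabs, ...).
isNonNewlineSpace :: Char -> Bool
isNonNewlineSpace c = isSpace c && c /= '\n' && c /= '\r'

-- Skip zero or more non-newline whitespace characters.
skipNonNewlineSpaces :: P ()
skipNonNewlineSpaces = () <$ takeWhileP Nothing isNonNewlineSpace

-- Skip one or more non-newline whitespace characters.
skipNonNewlineSpaces1 :: P ()
skipNonNewlineSpaces1 = () <$ takeWhile1P Nothing isNonNewlineSpace
```

Using `takeWhileP` instead of `skipSome`/`skipMany` of a single-character parser also tends to be faster in megaparsec, which may be part of the motivation.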
addAccountDeclaration :: (AccountName,Text,[Tag]) -> JournalParser m () @@ -400,7 +402,7 @@ j{jdeclaredaccounts = d:decls}) indentedlinep :: JournalParser m String -indentedlinep = lift (skipSome spacenonewline) >> (rstrip <$> lift restofline) +indentedlinep = lift skipNonNewlineSpaces1 >> (rstrip <$> lift restofline) -- | Parse a one-line or multi-line commodity directive. -- @@ -419,13 +421,13 @@ commoditydirectiveonelinep = do (off, Amount{acommodity,astyle}) <- try $ do string "commodity" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 off <- getOffset amount <- amountp pure $ (off, amount) - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces _ <- lift followingcommentp - let comm = Commodity{csymbol=acommodity, cformat=Just $ dbg7 "style from commodity directive" astyle} + let comm = Commodity{csymbol=acommodity, cformat=Just $ dbg6 "style from commodity directive" astyle} if asdecimalpoint astyle == Nothing then customFailure $ parseErrorAt off pleaseincludedecimalpoint else modify' (\j -> j{jcommodities=M.insert acommodity comm $ jcommodities j}) @@ -447,21 +449,21 @@ commoditydirectivemultilinep :: JournalParser m () commoditydirectivemultilinep = do string "commodity" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 sym <- lift commoditysymbolp _ <- lift followingcommentp mformat <- lastMay <$> many (indented $ formatdirectivep sym) let comm = Commodity{csymbol=sym, cformat=mformat} modify' (\j -> j{jcommodities=M.insert sym comm $ jcommodities j}) where - indented = (lift (skipSome spacenonewline) >>) + indented = (lift skipNonNewlineSpaces1 >>) -- | Parse a format (sub)directive, throwing a parse error if its -- symbol does not match the one given. 
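The hunk above extends the account type codes with `Cash`/`"cash"`/`"c"`. As a self-contained illustration, the mapping amounts to something like the sketch below; `AccountType` and `parseAccountTypeCode` are hypothetical stand-ins here (the real `AccountType` lives in Hledger.Data.Types and the real parsing is inline in JournalReader).

```haskell
import Data.Char (toLower)

-- Hypothetical stand-in for hledger's AccountType; 1.19 adds Cash.
data AccountType = Asset | Liability | Equity | Revenue | Expense | Cash
  deriving (Eq, Show)

-- The case-insensitive code mapping shown in the diff, as a pure function.
parseAccountTypeCode :: String -> Either String AccountType
parseAccountTypeCode s = case map toLower s of
  "asset"     -> Right Asset
  "a"         -> Right Asset
  "liability" -> Right Liability
  "l"         -> Right Liability
  "equity"    -> Right Equity
  "e"         -> Right Equity
  "revenue"   -> Right Revenue
  "r"         -> Right Revenue
  "expense"   -> Right Expense
  "x"         -> Right Expense
  "cash"      -> Right Cash
  "c"         -> Right Cash
  _           -> Left ("invalid account type code " ++ s)
```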
formatdirectivep :: CommoditySymbol -> JournalParser m AmountStyle formatdirectivep expectedsym = do string "format" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 off <- getOffset Amount{acommodity,astyle} <- amountp _ <- lift followingcommentp @@ -469,7 +471,7 @@ then if asdecimalpoint astyle == Nothing then customFailure $ parseErrorAt off pleaseincludedecimalpoint - else return $ dbg7 "style from format subdirective" astyle + else return $ dbg6 "style from format subdirective" astyle else customFailure $ parseErrorAt off $ printf "commodity directive symbol \"%s\" and format directive symbol \"%s\" should be the same" expectedsym acommodity @@ -477,7 +479,7 @@ keywordp = (() <$) . string . fromString spacesp :: JournalParser m () -spacesp = () <$ lift (skipSome spacenonewline) +spacesp = () <$ lift skipNonNewlineSpaces1 -- | Backtracking parser similar to string, but allows varying amount of space between words keywordsp :: String -> JournalParser m () @@ -486,7 +488,7 @@ applyaccountdirectivep :: JournalParser m () applyaccountdirectivep = do keywordsp "apply account" "apply account directive" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 parent <- lift accountnamep newline pushParentAccount parent @@ -499,7 +501,7 @@ aliasdirectivep :: JournalParser m () aliasdirectivep = do string "alias" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 alias <- lift accountaliasp addAccountAlias alias @@ -511,7 +513,7 @@ -- dbgparse 0 "basicaliasp" old <- rstrip <$> (some $ noneOf ("=" :: [Char])) char '=' - skipMany spacenonewline + skipNonNewlineSpaces new <- rstrip <$> anySingle `manyTill` eolof -- eol in journal, eof in command lines, normally return $ BasicAlias (T.pack old) (T.pack new) @@ -519,13 +521,17 @@ regexaliasp = do -- dbgparse 0 "regexaliasp" char '/' + off1 <- getOffset re <- some $ noneOf ("/\n\r" :: [Char]) -- paranoid: don't try to read past line end + off2 <- getOffset char '/' - skipMany spacenonewline + 
skipNonNewlineSpaces char '=' - skipMany spacenonewline + skipNonNewlineSpaces repl <- anySingle `manyTill` eolof - return $ RegexAlias re repl + case toRegexCI re of + Right r -> return $! RegexAlias r repl + Left e -> customFailure $! parseErrorAtRegion off1 off2 e endaliasesdirectivep :: JournalParser m () endaliasesdirectivep = do @@ -535,7 +541,7 @@ tagdirectivep :: JournalParser m () tagdirectivep = do string "tag" "tag directive" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 _ <- lift $ some nonspace lift restofline return () @@ -549,16 +555,13 @@ defaultyeardirectivep :: JournalParser m () defaultyeardirectivep = do char 'Y' "default year" - lift (skipMany spacenonewline) - y <- some digitChar - let y' = read y - failIfInvalidYear y - setYear y' + lift skipNonNewlineSpaces + setYear =<< lift yearp defaultcommoditydirectivep :: JournalParser m () defaultcommoditydirectivep = do char 'D' "default commodity" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 off <- getOffset Amount{acommodity,astyle} <- amountp lift restofline @@ -569,11 +572,11 @@ marketpricedirectivep :: JournalParser m PriceDirective marketpricedirectivep = do char 'P' "market price" - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces date <- try (do {LocalTime d _ <- datetimep; return d}) <|> datep -- a time is ignored - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 symbol <- lift commoditysymbolp - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces price <- amountp lift restofline return $ PriceDirective date symbol price @@ -581,7 +584,7 @@ ignoredpricecommoditydirectivep :: JournalParser m () ignoredpricecommoditydirectivep = do char 'N' "ignored-price commodity" - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 lift commoditysymbolp lift restofline return () @@ -589,11 +592,11 @@ commodityconversiondirectivep :: JournalParser m () commodityconversiondirectivep = do char 'C' "commodity conversion" - lift (skipSome 
spacenonewline) + lift skipNonNewlineSpaces1 amountp - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces char '=' - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces amountp lift restofline return () @@ -604,7 +607,7 @@ transactionmodifierp :: JournalParser m TransactionModifier transactionmodifierp = do char '=' "modifier transaction" - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces querytxt <- lift $ T.strip <$> descriptionp (_comment, _tags) <- lift transactioncommentp -- TODO apply these to modified txns ? postings <- postingsp Nothing @@ -624,7 +627,7 @@ -- first line char '~' "periodic transaction" - lift $ skipMany spacenonewline + lift $ skipNonNewlineSpaces -- a period expression off <- getOffset @@ -704,7 +707,7 @@ -- linebeginningwithspaces :: JournalParser m String -- linebeginningwithspaces = do --- sp <- lift (skipSome spacenonewline) +-- sp <- lift skipNonNewlineSpaces1 -- c <- nonspace -- cs <- lift restofline -- return $ sp ++ (c:cs) ++ "\n" @@ -713,17 +716,17 @@ postingp mTransactionYear = do -- lift $ dbgparse 0 "postingp" (status, account) <- try $ do - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 status <- lift statusp - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces account <- modifiedaccountnamep return (status, account) let (ptype, account') = (accountNamePostingType account, textUnbracket account) - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces amount <- option missingmixedamt $ Mixed . 
(:[]) <$> amountp - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces massertion <- optional balanceassertionp - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces (comment,tags,mdate,mdate2) <- lift $ postingcommentp mTransactionYear return posting { pdate=mdate @@ -776,7 +779,7 @@ bad "2011/1/1 00:00:60" bad "2011/1/1 3:5:7" -- timezone is parsed but ignored - let t = LocalTime (fromGregorian 2018 1 1) (TimeOfDay 0 0 (fromIntegral 0)) + let t = LocalTime (fromGregorian 2018 1 1) (TimeOfDay 0 0 0) assertParseEq datetimep "2018/1/1 00:00-0800" t assertParseEq datetimep "2018/1/1 00:00+1234" t @@ -995,7 +998,7 @@ ,tests "defaultyeardirectivep" [ test "1000" $ assertParse defaultyeardirectivep "Y 1000" -- XXX no \n like the others - ,test "999" $ assertParseError defaultyeardirectivep "Y 999" "bad year number" + -- ,test "999" $ assertParseError defaultyeardirectivep "Y 999" "bad year number" ,test "12345" $ assertParse defaultyeardirectivep "Y 12345" ] diff -Nru haskell-hledger-lib-1.18.1/Hledger/Read/TimeclockReader.hs haskell-hledger-lib-1.19.1/Hledger/Read/TimeclockReader.hs --- haskell-hledger-lib-1.18.1/Hledger/Read/TimeclockReader.hs 2020-04-21 00:41:57.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Read/TimeclockReader.hs 2020-08-29 21:29:10.000000000 +0000 @@ -121,10 +121,10 @@ timeclockentryp = do sourcepos <- genericSourcePos <$> lift getSourcePos code <- oneOf ("bhioO" :: [Char]) - lift (skipSome spacenonewline) + lift skipNonNewlineSpaces1 datetime <- datetimep - account <- fromMaybe "" <$> optional (lift (skipSome spacenonewline) >> modifiedaccountnamep) - description <- T.pack . fromMaybe "" <$> lift (optional (skipSome spacenonewline >> restofline)) + account <- fromMaybe "" <$> optional (lift skipNonNewlineSpaces1 >> modifiedaccountnamep) + description <- T.pack . 
fromMaybe "" <$> lift (optional (skipNonNewlineSpaces1 >> restofline)) return $ TimeclockEntry sourcepos (read [code]) datetime account description diff -Nru haskell-hledger-lib-1.18.1/Hledger/Read/TimedotReader.hs haskell-hledger-lib-1.19.1/Hledger/Read/TimedotReader.hs --- haskell-hledger-lib-1.18.1/Hledger/Read/TimedotReader.hs 2020-04-21 00:41:57.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Read/TimedotReader.hs 2020-08-31 23:04:28.000000000 +0000 @@ -159,7 +159,7 @@ orgheadingprefixp = do -- traceparse "orgheadingprefixp" - skipSome (char '*') >> skipSome spacenonewline + skipSome (char '*') >> skipNonNewlineSpaces1 -- | Parse a single timedot entry to one (dateless) transaction. -- @ @@ -170,9 +170,9 @@ lift $ traceparse "entryp" pos <- genericSourcePos <$> getSourcePos notFollowedBy datelinep - lift $ optional $ choice [orgheadingprefixp, skipSome spacenonewline] + lift $ optional $ choice [orgheadingprefixp, skipNonNewlineSpaces1] a <- modifiedaccountnamep - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces hours <- try (lift followingcommentp >> return 0) <|> (durationp <* @@ -182,7 +182,7 @@ tstatus = Cleared, tpostings = [ nullposting{paccount=a - ,pamount=Mixed [setAmountPrecision 2 $ num hours] -- don't assume hours; do set precision to 2 + ,pamount=Mixed [setAmountPrecision (Precision 2) $ num hours] -- don't assume hours; do set precision to 2 ,ptype=VirtualPosting ,ptransaction=Just t } @@ -211,7 +211,7 @@ -- lift $ traceparse "numericquantityp" (q, _, _, _) <- lift $ numberp Nothing msymbol <- optional $ choice $ map (string . fst) timeUnits - lift (skipMany spacenonewline) + lift skipNonNewlineSpaces let q' = case msymbol of Nothing -> q @@ -240,7 +240,7 @@ dotquantityp = do -- lift $ traceparse "dotquantityp" dots <- filter (not.isSpace) <$> many (oneOf (". 
" :: [Char])) - return $ (/4) $ fromIntegral $ length dots + return $ fromIntegral (length dots) / 4 -- | XXX new comment line parser, move to Hledger.Read.Common.emptyorcommentlinep -- Parse empty lines, all-blank lines, and lines beginning with any of the provided @@ -249,7 +249,7 @@ emptyorcommentlinep cs = label ("empty line or comment line beginning with "++cs) $ do traceparse "emptyorcommentlinep" -- XXX possible to combine label and traceparse ? - skipMany spacenonewline + skipNonNewlineSpaces void newline <|> void commentp traceparse' "emptyorcommentlinep" where diff -Nru haskell-hledger-lib-1.18.1/Hledger/Read.hs haskell-hledger-lib-1.19.1/Hledger/Read.hs --- haskell-hledger-lib-1.18.1/Hledger/Read.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Read.hs 2020-08-31 23:04:28.000000000 +0000 @@ -67,7 +67,7 @@ import System.IO (stderr, writeFile) import Text.Printf (hPrintf, printf) -import Hledger.Data.Dates (getCurrentDay, parsedate, showDate) +import Hledger.Data.Dates (getCurrentDay, parsedateM, showDate) import Hledger.Data.Types import Hledger.Read.Common import Hledger.Read.JournalReader as JournalReader @@ -90,7 +90,7 @@ -- | Read a Journal from the given text, assuming journal format; or -- throw an error. readJournal' :: Text -> IO Journal -readJournal' t = readJournal def Nothing t >>= either error' return +readJournal' t = readJournal def Nothing t >>= either error' return -- PARTIAL: -- | @readJournal iopts mfile txt@ -- @@ -111,12 +111,12 @@ readJournal iopts mpath txt = do let r :: Reader IO = fromMaybe JournalReader.reader $ findReader (mformat_ iopts) mpath - dbg7IO "trying reader" (rFormat r) + dbg6IO "trying reader" (rFormat r) (runExceptT . (rReadFn r) iopts (fromMaybe "(string)" mpath)) txt -- | Read the default journal file specified by the environment, or raise an error. 
defaultJournal :: IO Journal -defaultJournal = defaultJournalPath >>= readJournalFile def >>= either error' return +defaultJournal = defaultJournalPath >>= readJournalFile def >>= either error' return -- PARTIAL: -- | Get the default journal file path specified by the environment. -- Like ledger, we look first for the LEDGER_FILE environment @@ -251,9 +251,11 @@ previousLatestDates :: FilePath -> IO LatestDates previousLatestDates f = do let latestfile = latestDatesFileFor f + parsedate s = maybe (fail $ "could not parse date \"" ++ s ++ "\"") return $ + parsedateM s exists <- doesFileExist latestfile if exists - then map (parsedate . strip) . lines . strip . T.unpack <$> readFileStrictly latestfile + then traverse (parsedate . T.unpack . T.strip) . T.lines =<< readFileStrictly latestfile else return [] -- | Where to save latest transaction dates for the given file path. diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/AccountTransactionsReport.hs haskell-hledger-lib-1.19.1/Hledger/Reports/AccountTransactionsReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/AccountTransactionsReport.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/AccountTransactionsReport.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,4 +1,4 @@ -{-# LANGUAGE OverloadedStrings, RecordWildCards, DeriveDataTypeable, FlexibleInstances #-} +{-# LANGUAGE OverloadedStrings, RecordWildCards, FlexibleInstances #-} {-| An account-centric transactions report. @@ -29,9 +29,10 @@ -- | An account transactions report represents transactions affecting -- a particular account (or possibly several accounts, but we don't --- use that). It is used eg by hledger-ui's and hledger-web's account --- register view, where we want to show one row per transaction, in --- the context of the current account. Report items consist of: +-- use that). 
It is used eg by hledger-ui's and hledger-web's register +-- view, and hledger's aregister report, where we want to show one row +-- per transaction, in the context of the current account. Report +-- items consist of: -- -- - the transaction, unmodified -- @@ -84,49 +85,62 @@ accountTransactionsReport :: ReportOpts -> Journal -> Query -> Query -> AccountTransactionsReport accountTransactionsReport ropts j reportq thisacctq = (label, items) where - -- a depth limit does not affect the account transactions report + -- a depth limit should not affect the account transactions report -- seems unnecessary for some reason XXX reportq' = -- filterQuery (not . queryIsDepth) reportq -- get all transactions - ts1 = jtxns j + ts1 = + -- ptraceAtWith 5 (("ts1:\n"++).pshowTransactions) $ + jtxns j -- apply any cur:SYM filters in reportq' symq = filterQuery queryIsSym reportq' - ts2 = (if queryIsNull symq then id else map (filterTransactionAmounts symq)) ts1 + ts2 = + ptraceAtWith 5 (("ts2:\n"++).pshowTransactions) $ + (if queryIsNull symq then id else map (filterTransactionAmounts symq)) ts1 -- keep just the transactions affecting this account (via possibly realness or status-filtered postings) realq = filterQuery queryIsReal reportq' statusq = filterQuery queryIsStatus reportq' - ts3 = filter (matchesTransaction thisacctq . filterTransactionPostings (And [realq, statusq])) ts2 + ts3 = + traceAt 3 ("thisacctq: "++show thisacctq) $ + ptraceAtWith 5 (("ts3:\n"++).pshowTransactions) $ + filter (matchesTransaction thisacctq . 
filterTransactionPostings (And [realq, statusq])) ts2 -- maybe convert these transactions to cost or value + -- PARTIAL: prices = journalPriceOracle (infer_value_ ropts) j styles = journalCommodityStyles j periodlast = fromMaybe (error' "journalApplyValuation: expected a non-empty journal") $ -- XXX shouldn't happen reportPeriodOrJournalLastDay ropts j mreportlast = reportPeriodLastDay ropts - today = fromMaybe (error' "journalApplyValuation: could not pick a valuation date, ReportOpts today_ is unset") $ today_ ropts + today = fromMaybe (error' "journalApplyValuation: could not pick a valuation date, ReportOpts today_ is unset") $ today_ ropts -- XXX shouldn't happen multiperiod = interval_ ropts /= NoInterval tval = case value_ ropts of Just v -> \t -> transactionApplyValuation prices styles periodlast mreportlast today multiperiod t v Nothing -> id - ts4 = map tval ts3 + ts4 = + ptraceAtWith 5 (("ts4:\n"++).pshowTransactions) $ + map tval ts3 -- sort by the transaction's register date, for accurate starting balance - ts = sortBy (comparing (transactionRegisterDate reportq' thisacctq)) ts4 + -- these are not yet filtered by tdate, we want to search them all for priorps + ts5 = + ptraceAtWith 5 (("ts5:\n"++).pshowTransactions) $ + sortBy (comparing (transactionRegisterDate reportq' thisacctq)) ts4 (startbal,label) | balancetype_ ropts == HistoricalBalance = (sumPostings priorps, balancelabel) - | otherwise = (nullmixedamt, totallabel) + | otherwise = (nullmixedamt, totallabel) where - priorps = dbg1 "priorps" $ + priorps = dbg5 "priorps" $ filter (matchesPosting - (dbg1 "priorq" $ + (dbg5 "priorq" $ And [thisacctq, tostartdateq, datelessreportq])) - $ transactionsPostings ts + $ transactionsPostings ts5 tostartdateq = case mstartdate of Just _ -> Date (DateSpan Nothing mstartdate) @@ -134,8 +148,19 @@ mstartdate = queryStartDate (date2_ ropts) reportq' datelessreportq = filterQuery (not . 
queryIsDateOrDate2) reportq' + -- accountTransactionsReportItem will keep transactions of any date which have any posting inside the report period. + -- Should we also require that transaction date is inside the report period ? + -- Should we be filtering by reportq here to apply other query terms (?) + -- Make it an option for now. + filtertxns = txn_dates_ ropts + items = reverse $ - accountTransactionsReportItems reportq' thisacctq startbal negate ts + accountTransactionsReportItems reportq' thisacctq startbal negate $ + (if filtertxns then filter (reportq' `matchesTransaction`) else id) $ + ts5 + +pshowTransactions :: [Transaction] -> String +pshowTransactions = pshow . map (\t -> unwords [show $ tdate t, T.unpack $ tdescription t]) -- | Generate transactions report items from a list of transactions, -- using the provided user-specified report query, a query specifying diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/BalanceReport.hs haskell-hledger-lib-1.19.1/Hledger/Reports/BalanceReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/BalanceReport.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/BalanceReport.hs 2020-08-31 23:04:28.000000000 +0000 @@ -11,25 +11,21 @@ BalanceReportItem, balanceReport, flatShowsExclusiveBalance, - sortAccountItemsLike, - unifyMixedAmount, - perdivide, -- * Tests tests_BalanceReport ) where -import Data.List -import Data.Ord -import Data.Maybe import Data.Time.Calendar import Hledger.Data import Hledger.Read (mamountp') import Hledger.Query import Hledger.Utils +import Hledger.Reports.MultiBalanceReport (multiBalanceReportWith) import Hledger.Reports.ReportOptions +import Hledger.Reports.ReportTypes -- | A simple balance report. It has: @@ -63,169 +59,19 @@ -- | Generate a simple balance report, containing the matched accounts and -- their balances (change of balance) during the specified period. 
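The new `pshowTransactions` debug helper above renders each transaction as a compact "date description" line. A self-contained sketch of that formatting, with a hypothetical `Txn` record standing in for hledger's `Transaction`:

```haskell
import Data.Time.Calendar (Day, fromGregorian)

-- Stand-in for hledger's Transaction, just enough for the debug format.
data Txn = Txn { txnDate :: Day, txnDescription :: String }

-- One "date description" line per transaction, as pshowTransactions does
-- before pretty-printing the list.
showTxns :: [Txn] -> [String]
showTxns = map (\t -> unwords [show (txnDate t), txnDescription t])
```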
--- This is like PeriodChangeReport with a single column (but more mature, --- eg this can do hierarchical display). +-- If the normalbalance_ option is set, it adjusts the sorting and sign of +-- amounts (see ReportOpts and CompoundBalanceCommand). balanceReport :: ReportOpts -> Query -> Journal -> BalanceReport -balanceReport ropts@ReportOpts{..} q j = - (if invert_ then brNegate else id) $ - (mappedsorteditems, mappedtotal) - where - -- dbg = const id -- exclude from debug output - dbg s = let p = "balanceReport" in Hledger.Utils.dbg4 (p++" "++s) -- add prefix in debug output - dbg' s = let p = "balanceReport" in Hledger.Utils.dbg5 (p++" "++s) -- add prefix in debug output - - -- Get all the summed accounts & balances, according to the query, as an account tree. - -- If doing cost valuation, amounts will be converted to cost first. - accttree = ledgerRootAccount $ ledgerFromJournal q $ journalSelectingAmountFromOpts ropts j - - -- For other kinds of valuation, convert the summed amounts to value, - -- per hledger_options.m4.md "Effect of --value on reports". - valuedaccttree = mapAccounts avalue accttree - where - avalue a@Account{..} = a{aebalance=maybevalue aebalance, aibalance=maybevalue aibalance} - where - maybevalue = maybe id applyvaluation value_ - where - applyvaluation = mixedAmountApplyValuation priceoracle styles periodlast mreportlast today multiperiod - where - priceoracle = journalPriceOracle infer_value_ j - styles = journalCommodityStyles j - periodlast = fromMaybe - (error' "balanceReport: expected a non-empty journal") $ -- XXX shouldn't happen - reportPeriodOrJournalLastDay ropts j - mreportlast = reportPeriodLastDay ropts - today = fromMaybe (error' "balanceReport: could not pick a valuation date, ReportOpts today_ is unset") today_ - multiperiod = interval_ /= NoInterval - - -- Modify this tree for display - depth limit, boring parents, zeroes - and convert to a list. 
- displayaccts :: [Account] - | queryDepth q == 0 = - dbg' "displayaccts" $ - take 1 $ clipAccountsAndAggregate (queryDepth q) $ flattenAccounts valuedaccttree - | flat_ ropts = dbg' "displayaccts" $ - filterzeros $ - filterempty $ - drop 1 $ clipAccountsAndAggregate (queryDepth q) $ flattenAccounts valuedaccttree - | otherwise = dbg' "displayaccts" $ - filter (not.aboring) $ - drop 1 $ flattenAccounts $ - markboring $ - prunezeros $ - sortAccountTreeByAmount (fromMaybe NormallyPositive normalbalance_) $ - clipAccounts (queryDepth q) valuedaccttree - where - balance = if flat_ ropts then aebalance else aibalance - filterzeros = if empty_ then id else filter (not . mixedAmountLooksZero . balance) - filterempty = filter (\a -> anumpostings a > 0 || not (mixedAmountLooksZero (balance a))) - prunezeros = if empty_ then id else fromMaybe nullacct . pruneAccounts (mixedAmountLooksZero . balance) - markboring = if no_elide_ then id else markBoringParentAccounts - - -- Make a report row for each account. - items = dbg "items" $ map (balanceReportItem ropts q) displayaccts - - -- Sort report rows (except sorting by amount in tree mode, which was done above). - sorteditems - | sort_amount_ && tree_ ropts = items - | sort_amount_ = sortFlatBRByAmount items - | otherwise = sortBRByAccountDeclaration items - where - -- Sort the report rows, representing a flat account list, by row total. - sortFlatBRByAmount :: [BalanceReportItem] -> [BalanceReportItem] - sortFlatBRByAmount = sortBy (maybeflip $ comparing (normaliseMixedAmountSquashPricesForDisplay . fourth4)) - where - maybeflip = if normalbalance_ == Just NormallyNegative then id else flip - -- Sort the report rows by account declaration order then account name. 
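The declaration-order sorting removed here relied on the `sortAccountItemsLike` helper (also dropped from this module's exports, its role now absorbed by the shared MultiBalanceReport code). For reference, that helper is small enough to reproduce verbatim:

```haskell
-- Reorder keyed rows (eg report rows keyed by account name) to match a
-- given ordering of keys, dropping keys that have no corresponding row.
sortAccountItemsLike :: Eq a => [a] -> [(a, b)] -> [b]
sortAccountItemsLike sortedas items =
  concatMap (\a -> maybe [] (: []) (lookup a items)) sortedas
```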
- sortBRByAccountDeclaration :: [BalanceReportItem] -> [BalanceReportItem] - sortBRByAccountDeclaration rows = sortedrows - where - anamesandrows = [(first4 r, r) | r <- rows] - anames = map fst anamesandrows - sortedanames = sortAccountNamesByDeclaration j (tree_ ropts) anames - sortedrows = sortAccountItemsLike sortedanames anamesandrows - - -- Calculate the grand total. - total | not (flat_ ropts) = dbg "total" $ sum [amt | (_,_,indent,amt) <- items, indent == 0] - | otherwise = dbg "total" $ - if flatShowsExclusiveBalance - then sum $ map fourth4 items - else sum $ map aebalance $ clipAccountsAndAggregate 1 displayaccts - - -- Calculate percentages if needed. - mappedtotal | percent_ = dbg "mappedtotal" $ total `perdivide` total - | otherwise = total - mappedsorteditems | percent_ = - dbg "mappedsorteditems" $ - map (\(fname, sname, indent, amount) -> (fname, sname, indent, amount `perdivide` total)) sorteditems - | otherwise = sorteditems - --- | A sorting helper: sort a list of things (eg report rows) keyed by account name --- to match the provided ordering of those same account names. -sortAccountItemsLike :: [AccountName] -> [(AccountName, b)] -> [b] -sortAccountItemsLike sortedas items = - concatMap (\a -> maybe [] (:[]) $ lookup a items) sortedas - --- | In an account tree with zero-balance leaves removed, mark the --- elidable parent accounts (those with one subaccount and no balance --- of their own). -markBoringParentAccounts :: Account -> Account -markBoringParentAccounts = tieAccountParents . 
mapAccounts mark +balanceReport ropts q j = (rows, total) where - mark a | length (asubs a) == 1 && mixedAmountLooksZero (aebalance a) = a{aboring=True} - | otherwise = a + report = multiBalanceReportWith ropts q j (journalPriceOracle (infer_value_ ropts) j) + rows = [( prrFullName row + , prrDisplayName row + , prrDepth row - 1 -- BalanceReport uses 0-based account depths + , prrTotal row + ) | row <- prRows report] + total = prrTotal $ prTotals report -balanceReportItem :: ReportOpts -> Query -> Account -> BalanceReportItem -balanceReportItem opts q a - | flat_ opts = (name, name, 0, (if flatShowsExclusiveBalance then aebalance else aibalance) a) - | otherwise = (name, elidedname, indent, aibalance a) - where - name | queryDepth q > 0 = aname a - | otherwise = "..." - elidedname = accountNameFromComponents (adjacentboringparentnames ++ [accountLeafName name]) - adjacentboringparentnames = reverse $ map (accountLeafName.aname) $ takeWhile aboring parents - indent = length $ filter (not.aboring) parents - -- parents exclude the tree's root node - parents = case parentAccounts a of [] -> [] - as -> init as - --- -- the above using the newer multi balance report code: --- balanceReport' opts q j = (items, total) --- where --- MultiBalanceReport (_,mbrrows,mbrtotals) = PeriodChangeReport opts q j --- items = [(a,a',n, headDef 0 bs) | ((a,a',n), bs) <- mbrrows] --- total = headDef 0 mbrtotals - --- | Flip the sign of all amounts in a BalanceReport. -brNegate :: BalanceReport -> BalanceReport -brNegate (is, tot) = (map brItemNegate is, -tot) - where - brItemNegate (a, a', d, amt) = (a, a', d, -amt) - --- | Helper to unify a MixedAmount to a single commodity value. --- Like normaliseMixedAmount, this consolidates amounts of the same commodity --- and discards zero amounts; but this one insists on simplifying to --- a single commodity, and will throw a program-terminating error if --- this is not possible. 
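The `perdivide` helper removed below computed the percentage of one amount relative to another, keeping the numerator's sign. Its core arithmetic, sketched on plain `Double`s under a hypothetical name (the real version works on `MixedAmount` and errors on mixed commodities):

```haskell
-- Percentage of a relative to b: keep a's sign, divide by b's absolute
-- value, scale to percent; a zero denominator yields 0.
percentOf :: Double -> Double -> Double
percentOf a b
  | b == 0    = 0
  | otherwise = a / abs b * 100
```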
-unifyMixedAmount :: MixedAmount -> Amount -unifyMixedAmount mixedAmount = foldl combine (num 0) (amounts mixedAmount) - where - combine amount result = - if amountIsZero amount - then result - else if amountIsZero result - then amount - else if acommodity amount == acommodity result - then amount + result - else error' "Cannot calculate percentages for accounts with multiple commodities. (Hint: Try --cost, -V or similar flags.)" - --- | Helper to calculate the percentage from two mixed. Keeps the sign of the first argument. --- Uses unifyMixedAmount to unify each argument and then divides them. -perdivide :: MixedAmount -> MixedAmount -> MixedAmount -perdivide a b = - let a' = unifyMixedAmount a - b' = unifyMixedAmount b - in if amountIsZero a' || amountIsZero b' || acommodity a' == acommodity b' - then mixed [per $ if aquantity b' == 0 then 0 else (aquantity a' / abs (aquantity b') * 100)] - else error' "Cannot calculate percentages if accounts have different commodities. (Hint: Try --cost, -V or similar flags.)" -- tests @@ -236,8 +82,8 @@ txnTieKnot Transaction{ tindex=0, tsourcepos=nullsourcepos, - tdate=parsedate "2008/01/01", - tdate2=Just $ parsedate "2009/01/01", + tdate=fromGregorian 2008 01 01, + tdate2=Just $ fromGregorian 2009 01 01, tstatus=Unmarked, tcode="", tdescription="income", @@ -259,17 +105,30 @@ let (eitems, etotal) = r (aitems, atotal) = balanceReport opts (queryFromOpts nulldate opts) journal showw (acct,acct',indent,amt) = (acct, acct', indent, showMixedAmountDebug amt) - (map showw eitems) @?= (map showw aitems) - (showMixedAmountDebug etotal) @?= (showMixedAmountDebug atotal) + (map showw aitems) @?= (map showw eitems) + (showMixedAmountDebug atotal) @?= (showMixedAmountDebug etotal) in tests "balanceReport" [ test "no args, null journal" $ - (defreportopts, nulljournal) `gives` ([], Mixed [nullamt]) + (defreportopts, nulljournal) `gives` ([], 0) ,test "no args, sample journal" $ (defreportopts, samplejournal) `gives` ([ + 
("assets:bank:checking","assets:bank:checking",0, mamountp' "$1.00") + ,("assets:bank:saving","assets:bank:saving",0, mamountp' "$1.00") + ,("assets:cash","assets:cash",0, mamountp' "$-2.00") + ,("expenses:food","expenses:food",0, mamountp' "$1.00") + ,("expenses:supplies","expenses:supplies",0, mamountp' "$1.00") + ,("income:gifts","income:gifts",0, mamountp' "$-1.00") + ,("income:salary","income:salary",0, mamountp' "$-1.00") + ], + Mixed [usd 0]) + + ,test "with --tree" $ + (defreportopts{accountlistmode_=ALTree}, samplejournal) `gives` + ([ ("assets","assets",0, mamountp' "$0.00") ,("assets:bank","bank",1, mamountp' "$2.00") ,("assets:bank:checking","checking",2, mamountp' "$1.00") @@ -302,8 +161,7 @@ ,test "with date:" $ (defreportopts{query_="date:'in 2009'"}, samplejournal2) `gives` - ([], - Mixed [nullamt]) + ([], 0) ,test "with date2:" $ (defreportopts{query_="date2:'in 2009'"}, samplejournal2) `gives` @@ -324,12 +182,10 @@ ,test "with not:desc:" $ (defreportopts{query_="not:desc:income"}, samplejournal) `gives` ([ - ("assets","assets",0, mamountp' "$-1.00") - ,("assets:bank:saving","bank:saving",1, mamountp' "$1.00") - ,("assets:cash","cash",1, mamountp' "$-2.00") - ,("expenses","expenses",0, mamountp' "$2.00") - ,("expenses:food","food",1, mamountp' "$1.00") - ,("expenses:supplies","supplies",1, mamountp' "$1.00") + ("assets:bank:saving","assets:bank:saving",0, mamountp' "$1.00") + ,("assets:cash","assets:cash",0, mamountp' "$-2.00") + ,("expenses:food","expenses:food",0, mamountp' "$1.00") + ,("expenses:supplies","expenses:supplies",0, mamountp' "$1.00") ,("income:gifts","income:gifts",0, mamountp' "$-1.00") ], Mixed [usd 0]) @@ -345,7 +201,7 @@ ,test "with period on an unpopulated period" $ (defreportopts{period_= PeriodBetween (fromGregorian 2008 1 2) (fromGregorian 2008 1 3)}, samplejournal) `gives` - ([],Mixed [nullamt]) + ([], 0) diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/BudgetReport.hs 
haskell-hledger-lib-1.19.1/Hledger/Reports/BudgetReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/BudgetReport.hs 2020-06-06 22:02:27.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/BudgetReport.hs 2020-08-29 21:29:10.000000000 +0000 @@ -2,21 +2,37 @@ -} {-# LANGUAGE CPP #-} +{-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} -module Hledger.Reports.BudgetReport +module Hledger.Reports.BudgetReport ( + BudgetGoal, + BudgetTotal, + BudgetAverage, + BudgetCell, + BudgetReportRow, + BudgetReport, + budgetReport, + budgetReportAsTable, + budgetReportAsText, + -- * Helpers + reportPeriodName, + -- * Tests + tests_BudgetReport +) where import Data.Decimal +import Data.HashMap.Strict (HashMap) +import qualified Data.HashMap.Strict as HM import Data.List import Data.List.Extra (nubSort) import Data.Maybe #if !(MIN_VERSION_base(4,11,0)) import Data.Monoid ((<>)) #endif -import Data.Ord import Data.Time.Calendar import Safe --import Data.List @@ -29,15 +45,11 @@ --import Lucid as L import Text.Printf (printf) import Text.Tabular as T ---import Text.Tabular.AsciiWide import Hledger.Data ---import Hledger.Query import Hledger.Utils ---import Hledger.Read (mamountp') import Hledger.Reports.ReportOptions import Hledger.Reports.ReportTypes -import Hledger.Reports.BalanceReport (sortAccountItemsLike) import Hledger.Reports.MultiBalanceReport @@ -47,16 +59,16 @@ -- | A budget report tracks expected and actual changes per account and subperiod. 
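[Editor's note] As a sketch of the `BudgetCell` type exported above — a pair of an optional actual change and an optional goal — here is a minimal illustration. This is not hledger's code: `Double` stands in for `MixedAmount`, and `percentOfGoal` is a hypothetical helper, not the report's real `percentage` function.

```haskell
-- Sketch only: Double stands in for hledger's MixedAmount.
type Change     = Double
type BudgetGoal = Double
type BudgetCell = (Maybe Change, Maybe BudgetGoal)

-- A cell may have no actual amount, no goal, or a comparable pair;
-- a percentage only makes sense when both parts are present.
percentOfGoal :: BudgetCell -> Maybe Double
percentOfGoal (Just actual, Just goal)
  | goal /= 0 = Just (100 * actual / goal)
percentOfGoal _ = Nothing

main :: IO ()
main = do
  print (percentOfGoal (Just 50, Just 200))  -- Just 25.0
  print (percentOfGoal (Just 50, Nothing))   -- Nothing
```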
type BudgetCell = (Maybe Change, Maybe BudgetGoal) -type BudgetReport = PeriodicReport AccountName BudgetCell -type BudgetReportRow = PeriodicReportRow AccountName BudgetCell +type BudgetReportRow = PeriodicReportRow DisplayName BudgetCell +type BudgetReport = PeriodicReport DisplayName BudgetCell -- | Calculate budget goals from all periodic transactions, -- actual balance changes from the regular transactions, -- and compare these to get a 'BudgetReport'. -- Unbudgeted accounts may be hidden or renamed (see budgetRollup). budgetReport :: ReportOpts -> Bool -> DateSpan -> Day -> Journal -> BudgetReport -budgetReport ropts' assrt reportspan d j = - let +budgetReport ropts' assrt reportspan d j = dbg1 "sortedbudgetreport" budgetreport + where -- Budget report demands ALTree mode to ensure subaccounts and subaccount budgets are properly handled -- and that reports with and without --empty make sense when compared side by side ropts = ropts' { accountlistmode_ = ALTree } @@ -72,63 +84,16 @@ actualj = dbg1With (("actualj"++).show.jtxns) $ budgetRollUp budgetedaccts showunbudgeted j budgetj = dbg1With (("budgetj"++).show.jtxns) $ budgetJournal assrt ropts reportspan j actualreport@(PeriodicReport actualspans _ _) = - dbg1 "actualreport" $ multiBalanceReport d ropts actualj + dbg1 "actualreport" $ multiBalanceReport d ropts{empty_=True} actualj budgetgoalreport@(PeriodicReport _ budgetgoalitems budgetgoaltotals) = - dbg1 "budgetgoalreport" $ multiBalanceReport d (ropts{empty_=True}) budgetj + dbg1 "budgetgoalreport" $ multiBalanceReport d ropts{empty_=True} budgetj budgetgoalreport' -- If no interval is specified: -- budgetgoalreport's span might be shorter actualreport's due to periodic txns; -- it should be safe to replace it with the latter, so they combine well. 
| interval_ ropts == NoInterval = PeriodicReport actualspans budgetgoalitems budgetgoaltotals | otherwise = budgetgoalreport - budgetreport = combineBudgetAndActual budgetgoalreport' actualreport - sortedbudgetreport = sortBudgetReport ropts j budgetreport - in - dbg1 "sortedbudgetreport" sortedbudgetreport - --- | Sort a budget report's rows according to options. -sortBudgetReport :: ReportOpts -> Journal -> BudgetReport -> BudgetReport -sortBudgetReport ropts j (PeriodicReport ps rows trow) = PeriodicReport ps sortedrows trow - where - sortedrows - | sort_amount_ ropts && tree_ ropts = sortTreeBURByActualAmount rows - | sort_amount_ ropts = sortFlatBURByActualAmount rows - | otherwise = sortByAccountDeclaration rows - - -- Sort a tree-mode budget report's rows by total actual amount at each level. - sortTreeBURByActualAmount :: [BudgetReportRow] -> [BudgetReportRow] - sortTreeBURByActualAmount rows = sortedrows - where - anamesandrows = [(prrName r, r) | r <- rows] - anames = map fst anamesandrows - atotals = [(a, tot) | PeriodicReportRow a _ _ (tot,_) _ <- rows] - accounttree = accountTree "root" anames - accounttreewithbals = mapAccounts setibalance accounttree - where - setibalance a = a{aibalance= - fromMaybe 0 $ -- when there's no actual amount, assume 0; will mess up with negative amounts ? TODO - fromMaybe (error "sortTreeByAmount 1") $ -- should not happen, but it's ugly; TODO - lookup (aname a) atotals - } - sortedaccounttree = sortAccountTreeByAmount (fromMaybe NormallyPositive $ normalbalance_ ropts) accounttreewithbals - sortedanames = map aname $ drop 1 $ flattenAccounts sortedaccounttree - sortedrows = sortAccountItemsLike sortedanames anamesandrows - - -- Sort a flat-mode budget report's rows by total actual amount. - sortFlatBURByActualAmount :: [BudgetReportRow] -> [BudgetReportRow] - sortFlatBURByActualAmount = case normalbalance_ ropts of - Just NormallyNegative -> sortOn (fst . prrTotal) - _ -> sortOn (Down . fst . 
prrTotal) - - -- Sort the report rows by account declaration order then account name. - -- remains at the top. - sortByAccountDeclaration rows = sortedrows - where - (unbudgetedrow,rows') = partition ((=="") . prrName) rows - anamesandrows = [(prrName r, r) | r <- rows'] - anames = map fst anamesandrows - sortedanames = sortAccountNamesByDeclaration j (tree_ ropts) anames - sortedrows = unbudgetedrow ++ sortAccountItemsLike sortedanames anamesandrows + budgetreport = combineBudgetAndActual ropts j budgetgoalreport' actualreport -- | Use all periodic transactions in the journal to generate -- budget transactions in the specified report period. @@ -136,7 +101,7 @@ -- their purpose is to set goal amounts (of change) per account and period. budgetJournal :: Bool -> ReportOpts -> DateSpan -> Journal -> Journal budgetJournal assrt _ropts reportspan j = - either error' id $ journalBalanceTransactions assrt j{ jtxns = budgetts } + either error' id $ journalBalanceTransactions assrt j{ jtxns = budgetts } -- PARTIAL: where budgetspan = dbg2 "budgetspan" $ reportspan budgetts = @@ -187,20 +152,20 @@ -- - all accounts mentioned in either report, sorted by account code or -- account name or amount as appropriate. 
-- -combineBudgetAndActual :: MultiBalanceReport -> MultiBalanceReport -> BudgetReport -combineBudgetAndActual - (PeriodicReport budgetperiods budgetrows (PeriodicReportRow _ _ budgettots budgetgrandtot budgetgrandavg)) - (PeriodicReport actualperiods actualrows (PeriodicReportRow _ _ actualtots actualgrandtot actualgrandavg)) = - PeriodicReport periods rows totalrow +combineBudgetAndActual :: ReportOpts -> Journal -> MultiBalanceReport -> MultiBalanceReport -> BudgetReport +combineBudgetAndActual ropts j + (PeriodicReport budgetperiods budgetrows (PeriodicReportRow _ budgettots budgetgrandtot budgetgrandavg)) + (PeriodicReport actualperiods actualrows (PeriodicReportRow _ actualtots actualgrandtot actualgrandavg)) = + PeriodicReport periods sortedrows totalrow where periods = nubSort . filter (/= nulldatespan) $ budgetperiods ++ actualperiods -- first, combine any corresponding budget goals with actual changes rows1 = - [ PeriodicReportRow acct treeindent amtandgoals totamtandgoal avgamtandgoal - | PeriodicReportRow acct treeindent actualamts actualtot actualavg <- actualrows - , let mbudgetgoals = Map.lookup acct budgetGoalsByAcct :: Maybe ([BudgetGoal], BudgetTotal, BudgetAverage) - , let budgetmamts = maybe (replicate (length periods) Nothing) (map Just . first3) mbudgetgoals :: [Maybe BudgetGoal] + [ PeriodicReportRow acct amtandgoals totamtandgoal avgamtandgoal + | PeriodicReportRow acct actualamts actualtot actualavg <- actualrows + , let mbudgetgoals = HM.lookup (displayFull acct) budgetGoalsByAcct :: Maybe ([BudgetGoal], BudgetTotal, BudgetAverage) + , let budgetmamts = maybe (Nothing <$ periods) (map Just . 
first3) mbudgetgoals :: [Maybe BudgetGoal] , let mbudgettot = second3 <$> mbudgetgoals :: Maybe BudgetTotal , let mbudgetavg = third3 <$> mbudgetgoals :: Maybe BudgetAverage , let acctBudgetByPeriod = Map.fromList [ (p,budgetamt) | (p, Just budgetamt) <- zip budgetperiods budgetmamts ] :: Map DateSpan BudgetGoal @@ -210,15 +175,15 @@ , let avgamtandgoal = (Just actualavg, mbudgetavg) ] where - budgetGoalsByAcct :: Map AccountName ([BudgetGoal], BudgetTotal, BudgetAverage) = - Map.fromList [ (acct, (amts, tot, avg)) - | PeriodicReportRow acct _ amts tot avg <- budgetrows ] + budgetGoalsByAcct :: HashMap AccountName ([BudgetGoal], BudgetTotal, BudgetAverage) = + HM.fromList [ (displayFull acct, (amts, tot, avg)) + | PeriodicReportRow acct amts tot avg <- budgetrows ] -- next, make rows for budget goals with no actual changes rows2 = - [ PeriodicReportRow acct treeindent amtandgoals totamtandgoal avgamtandgoal - | PeriodicReportRow acct treeindent budgetgoals budgettot budgetavg <- budgetrows - , acct `notElem` map prrName rows1 + [ PeriodicReportRow acct amtandgoals totamtandgoal avgamtandgoal + | PeriodicReportRow acct budgetgoals budgettot budgetavg <- budgetrows + , displayFull acct `notElem` map prrFullName rows1 , let acctBudgetByPeriod = Map.fromList $ zip budgetperiods budgetgoals :: Map DateSpan BudgetGoal , let amtandgoals = [ (Nothing, Map.lookup p acctBudgetByPeriod) | p <- periods ] :: [BudgetCell] , let totamtandgoal = (Nothing, Just budgettot) @@ -226,14 +191,15 @@ ] -- combine and re-sort rows - -- TODO: use MBR code - -- TODO: respect --sort-amount -- TODO: add --sort-budget to sort by budget goal amount - rows :: [BudgetReportRow] = - sortOn prrName $ rows1 ++ rows2 + sortedrows :: [BudgetReportRow] = sortRowsLike (mbrsorted unbudgetedrows ++ mbrsorted rows') rows + where + (unbudgetedrows, rows') = partition ((==unbudgetedAccountName) . prrFullName) rows + mbrsorted = map prrFullName . sortRows ropts j . map (fmap $ fromMaybe 0 . 
fst) + rows = rows1 ++ rows2 -- TODO: grand total & average shows 0% when there are no actual amounts, inconsistent with other cells - totalrow = PeriodicReportRow () 0 + totalrow = PeriodicReportRow () [ (Map.lookup p totActualByPeriod, Map.lookup p totBudgetByPeriod) | p <- periods ] ( Just actualgrandtot, Just budgetgrandtot ) ( Just actualgrandavg, Just budgetgrandavg ) @@ -252,7 +218,7 @@ (showDateSpan $ periodicReportSpan budgetr) (case value_ of Just (AtCost _mc) -> ", valued at cost" - Just (AtThen _mc) -> error' unsupportedValueThenError -- TODO + Just (AtThen _mc) -> error' unsupportedValueThenError -- PARTIAL: Just (AtEnd _mc) -> ", valued at period ends" Just (AtNow _mc) -> ", current value" -- XXX duplicates the above @@ -262,9 +228,10 @@ Nothing -> "") actualwidth = maximum' $ map fst amountsAndGoals budgetwidth = maximum' $ map snd amountsAndGoals - amountsAndGoals = map (\(a,g) -> (amountLength a, amountLength g)) - . concatMap prrAmounts $ prRows budgetr - where amountLength = maybe 0 (length . showMixedAmountOneLineWithoutPrice) + amountsAndGoals = + map (\(a,g) -> (amountWidth a, amountWidth g)) . concatMap prrAmounts $ prRows budgetr + where + amountWidth = maybe 0 (length . 
showMixedAmountElided False) -- XXX lay out actual, percentage and/or goal in the single table cell for now, should probably use separate cells showcell :: BudgetCell -> String showcell (mactual, mbudget) = actualstr ++ " " ++ budgetstr @@ -278,10 +245,12 @@ case percentage actual budget of Just pct -> printf ("[%"++show percentwidth++"s%% of %"++show budgetwidth++"s]") - (show $ roundTo 0 pct) (showbudgetamt budget) + (show $ roundTo 0 pct) (showamt' budget) Nothing -> printf ("["++replicate (percentwidth+5) ' '++"%"++show budgetwidth++"s]") - (showbudgetamt budget) + (showamt' budget) + showamt = showMixedAmountElided color_ + showamt' = showMixedAmountElided False -- XXX colored budget amounts disrupts layout -- | Calculate the percentage of actual change to budget goal to show, if any. -- If valuing at cost, both amounts are converted to cost before comparing. @@ -297,12 +266,6 @@ Nothing where maybecost = if valuationTypeIsCost ropts then mixedAmountCost else id - showamt :: MixedAmount -> String - showamt | color_ = cshowMixedAmountOneLineWithoutPrice - | otherwise = showMixedAmountOneLineWithoutPrice - - -- don't show the budget amount in color, it messes up alignment - showbudgetamt = showMixedAmountOneLineWithoutPrice maybetranspose | transpose_ = \(Table rh ch vals) -> Table ch rh (transpose vals) | otherwise = id @@ -310,22 +273,26 @@ -- | Build a 'Table' from a multi-column balance report. 
budgetReportAsTable :: ReportOpts -> BudgetReport -> Table String String (Maybe MixedAmount, Maybe MixedAmount) budgetReportAsTable - ropts - (PeriodicReport periods rows (PeriodicReportRow _ _ coltots grandtot grandavg)) = + ropts@ReportOpts{balancetype_} + (PeriodicReport spans rows (PeriodicReportRow _ coltots grandtot grandavg)) = addtotalrow $ Table (T.Group NoLine $ map Header accts) (T.Group NoLine $ map Header colheadings) (map rowvals rows) where - colheadings = map showDateSpanMonthAbbrev periods + colheadings = map (reportPeriodName balancetype_ spans) spans ++ [" Total" | row_total_ ropts] ++ ["Average" | average_ ropts] + accts = map renderacct rows - renderacct (PeriodicReportRow a i _ _ _) - | tree_ ropts = replicate ((i-1)*2) ' ' ++ T.unpack (accountLeafName a) - | otherwise = T.unpack $ maybeAccountNameDrop ropts a - rowvals (PeriodicReportRow _ _ as rowtot rowavg) = + -- FIXME. Have to check explicitly for which to render here, since + -- budgetReport sets accountlistmode to ALTree. Find a principled way to do + -- this. + renderacct row = case accountlistmode_ ropts of + ALTree -> replicate ((prrDepth row - 1)*2) ' ' ++ T.unpack (prrDisplayName row) + ALFlat -> T.unpack . accountNameDrop (drop_ ropts) $ prrFullName row + rowvals (PeriodicReportRow _ as rowtot rowavg) = as ++ [rowtot | row_total_ ropts] ++ [rowavg | average_ ropts] addtotalrow | no_total_ ropts = id @@ -334,12 +301,26 @@ ++ [grandavg | average_ ropts && not (null coltots)] )) --- XXX here for now --- TODO: does not work for flat-by-default reports with --flat not specified explicitly --- | Drop leading components of accounts names as specified by --drop, but only in --flat mode. -maybeAccountNameDrop :: ReportOpts -> AccountName -> AccountName -maybeAccountNameDrop opts a | flat_ opts = accountNameDrop (drop_ opts) a - | otherwise = a +-- | Make a name for the given period in a multiperiod report, given +-- the type of balance being reported and the full set of report +-- periods. 
This will be used as a column heading (or row heading, in +-- a register summary report). We try to pick a useful name as follows: +-- +-- - ending-balance reports: the period's end date +-- +-- - balance change reports where the periods are months and all in the same year: +-- the short month name in the current locale +-- +-- - all other balance change reports: a description of the datespan, +-- abbreviated to compact form if possible (see showDateSpan). +-- +reportPeriodName :: BalanceType -> [DateSpan] -> DateSpan -> String +reportPeriodName balancetype spans = + case balancetype of + PeriodChange -> if multiyear then showDateSpan else showDateSpanMonthAbbrev + where + multiyear = (>1) $ length $ nubSort $ map spanStartYear spans + _ -> maybe "" (showDate . prevday) . spanEnd -- tests diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/EntriesReport.hs haskell-hledger-lib-1.19.1/Hledger/Reports/EntriesReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/EntriesReport.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/EntriesReport.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,4 +1,4 @@ -{-# LANGUAGE OverloadedStrings, RecordWildCards, DeriveDataTypeable, FlexibleInstances, ScopedTypeVariables #-} +{-# LANGUAGE OverloadedStrings, RecordWildCards, FlexibleInstances, ScopedTypeVariables #-} {-| Journal entries report, used by the print command. 
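[Editor's note] The `reportPeriodName` rules introduced above (period end date for ending-balance reports; short month names for period-change reports confined to one year; a fuller span description otherwise) can be sketched as follows. This is an assumption-laden stand-in, not hledger's implementation: `(year, month)` pairs replace `DateSpan`, and the formatting helpers are simplified.

```haskell
import Data.List (nub)

-- Sketch of the column-heading rules; (year, month) stands in for DateSpan.
data BalanceType = PeriodChange | HistoricalBalance deriving Eq
type Span = (Int, Int)  -- (year, month)

periodName :: BalanceType -> [Span] -> Span -> String
periodName PeriodChange spans (y, m)
  | multiyear = show y ++ "-" ++ show m   -- fuller span description
  | otherwise = months !! (m - 1)         -- short month name
  where
    multiyear = length (nub (map fst spans)) > 1
    months = ["Jan","Feb","Mar","Apr","May","Jun"
             ,"Jul","Aug","Sep","Oct","Nov","Dec"]
periodName _ _ (y, m) = show y ++ "-" ++ show m ++ "-end"  -- period end

main :: IO ()
main = do
  putStrLn (periodName PeriodChange [(2020,1),(2020,2)] (2020,2))   -- Feb
  putStrLn (periodName PeriodChange [(2019,12),(2020,1)] (2020,1))  -- 2020-1
```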
@@ -14,9 +14,10 @@ ) where -import Data.List -import Data.Maybe -import Data.Ord +import Data.List (sortBy) +import Data.Maybe (fromMaybe) +import Data.Ord (comparing) +import Data.Time (fromGregorian) import Hledger.Data import Hledger.Query @@ -45,12 +46,12 @@ where periodlast = fromMaybe today $ reportPeriodOrJournalLastDay ropts j mreportlast = reportPeriodLastDay ropts - today = fromMaybe (error' "erValue: could not pick a valuation date, ReportOpts today_ is unset") today_ -- should not happen + today = fromMaybe (error' "erValue: could not pick a valuation date, ReportOpts today_ is unset") today_ -- PARTIAL: should not happen tests_EntriesReport = tests "EntriesReport" [ tests "entriesReport" [ - test "not acct" $ (length $ entriesReport defreportopts (Not $ Acct "bank") samplejournal) @?= 1 - ,test "date" $ (length $ entriesReport defreportopts (Date $ mkdatespan "2008/06/01" "2008/07/01") samplejournal) @?= 3 + test "not acct" $ (length $ entriesReport defreportopts (Not . Acct $ toRegex' "bank") samplejournal) @?= 1 + ,test "date" $ (length $ entriesReport defreportopts (Date $ DateSpan (Just $ fromGregorian 2008 06 01) (Just $ fromGregorian 2008 07 01)) samplejournal) @?= 3 ] ] diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/MultiBalanceReport.hs haskell-hledger-lib-1.19.1/Hledger/Reports/MultiBalanceReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/MultiBalanceReport.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/MultiBalanceReport.hs 2020-09-02 02:58:25.000000000 +0000 @@ -1,4 +1,8 @@ -{-# LANGUAGE FlexibleInstances, RecordWildCards, ScopedTypeVariables, OverloadedStrings, DeriveGeneric #-} +{-# LANGUAGE CPP #-} +{-# LANGUAGE FlexibleInstances #-} +{-# LANGUAGE OverloadedStrings #-} +{-# LANGUAGE RecordWildCards #-} +{-# LANGUAGE ScopedTypeVariables #-} {-| Multi-column balance reports, used by the balance command. 
@@ -11,23 +15,39 @@ multiBalanceReport, multiBalanceReportWith, - balanceReportFromMultiBalanceReport, + + CompoundBalanceReport, + compoundBalanceReport, + compoundBalanceReportWith, + tableAsText, + sortRows, + sortRowsLike, + -- -- * Tests tests_MultiBalanceReport ) where -import Data.List -import Data.List.Extra (nubSort) +import Control.Monad (guard) +import Data.Foldable (toList) +import Data.List (sortOn, transpose) +import Data.List.NonEmpty (NonEmpty(..)) +import Data.HashMap.Strict (HashMap) +import qualified Data.HashMap.Strict as HM +import Data.Map (Map) import qualified Data.Map as M -import Data.Maybe -import Data.Ord -import Data.Time.Calendar -import Safe +import Data.Maybe (fromMaybe, mapMaybe) +import Data.Ord (Down(..)) +#if !(MIN_VERSION_base(4,11,0)) +import Data.Semigroup ((<>)) +#endif +import Data.Semigroup (sconcat) +import Data.Time.Calendar (Day, addDays, fromGregorian) +import Safe (headMay, lastDef, lastMay) import Text.Tabular as T -import Text.Tabular.AsciiWide +import Text.Tabular.AsciiWide (render) import Hledger.Data import Hledger.Query @@ -35,7 +55,6 @@ import Hledger.Read (mamountp') import Hledger.Reports.ReportOptions import Hledger.Reports.ReportTypes -import Hledger.Reports.BalanceReport -- | A multi balance report is a kind of periodic report, where the amounts @@ -45,9 +64,7 @@ -- -- 2. a list of rows, each containing: -- --- * the full account name --- --- * the account's depth +-- * the full account name, display name, and display depth -- -- * A list of amounts, one for each column. -- @@ -58,22 +75,25 @@ -- 3. the column totals, and the overall grand total (or zero for -- cumulative/historical reports) and grand average. 
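[Editor's note] The row shape described in the doc comment above — an account name, one amount per column, plus a row total and average — can be sketched like this. The field names are illustrative and `Double` stands in for `MixedAmount`; this is not hledger's `PeriodicReportRow`.

```haskell
-- Sketch of a report row: per-column amounts with total and average.
data Row = Row
  { rowAccount :: String
  , rowAmounts :: [Double]  -- one amount per report column
  , rowTotal   :: Double
  , rowAverage :: Double
  } deriving Show

mkRow :: String -> [Double] -> Row
mkRow acct amts =
  Row acct amts (sum amts) (sum amts / fromIntegral (length amts))

main :: IO ()
main = print (mkRow "assets:cash" [10, -2, 4])  -- total 12.0, average 4.0
```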
-type MultiBalanceReport = PeriodicReport AccountName MixedAmount -type MultiBalanceReportRow = PeriodicReportRow AccountName MixedAmount +type MultiBalanceReport = PeriodicReport DisplayName MixedAmount +type MultiBalanceReportRow = PeriodicReportRow DisplayName MixedAmount +type CompoundBalanceReport = CompoundPeriodicReport DisplayName MixedAmount -- type alias just to remind us which AccountNames might be depth-clipped, below. type ClippedAccountName = AccountName + + -- | Generate a multicolumn balance report for the matched accounts, -- showing the change of balance, accumulated balance, or historical balance --- in each of the specified periods. Does not support tree-mode boring parent eliding. --- If the normalbalance_ option is set, it adjusts the sorting and sign of amounts --- (see ReportOpts and CompoundBalanceCommand). --- hledger's most powerful and useful report, used by the balance --- command (in multiperiod mode) and (via multiBalanceReport') by the bs/cf/is commands. +-- in each of the specified periods. If the normalbalance_ option is set, it +-- adjusts the sorting and sign of amounts (see ReportOpts and +-- CompoundBalanceCommand). hledger's most powerful and useful report, used +-- by the balance command (in multiperiod mode) and (via compoundBalanceReport) +-- by the bs/cf/is commands. multiBalanceReport :: Day -> ReportOpts -> Journal -> MultiBalanceReport multiBalanceReport today ropts j = - multiBalanceReportWith ropts q j (journalPriceOracle infer j) + multiBalanceReportWith ropts q j (journalPriceOracle infer j) where q = queryFromOpts today ropts infer = infer_value_ ropts @@ -85,300 +105,492 @@ -- once for efficiency, passing it to each report by calling this -- function directly. 
multiBalanceReportWith :: ReportOpts -> Query -> Journal -> PriceOracle -> MultiBalanceReport -multiBalanceReportWith ropts@ReportOpts{..} q j priceoracle = - (if invert_ then prNegate else id) $ - PeriodicReport colspans mappedsortedrows mappedtotalsrow - where - -- add a prefix to this function's debug output - dbg s = let p = "multiBalanceReport" in Hledger.Utils.dbg3 (p++" "++s) - dbg' s = let p = "multiBalanceReport" in Hledger.Utils.dbg4 (p++" "++s) - dbg'' s = let p = "multiBalanceReport" in Hledger.Utils.dbg5 (p++" "++s) - -- dbg = const id -- exclude this function from debug output - - ---------------------------------------------------------------------- - -- 1. Queries, report/column dates. - - symq = dbg "symq" $ filterQuery queryIsSym $ dbg "requested q" q - depthq = dbg "depthq" $ filterQuery queryIsDepth q - depth = queryDepth depthq - depthless = dbg "depthless" . filterQuery (not . queryIsDepth) - datelessq = dbg "datelessq" $ filterQuery (not . queryIsDateOrDate2) q - dateqcons = if date2_ then Date2 else Date - -- The date span specified by -b/-e/-p options and query args if any. - requestedspan = dbg "requestedspan" $ queryDateSpan date2_ q - -- If the requested span is open-ended, close it using the journal's end dates. - -- This can still be the null (open) span if the journal is empty. - requestedspan' = dbg "requestedspan'" $ requestedspan `spanDefaultsFrom` journalDateSpan date2_ j - -- The list of interval spans enclosing the requested span. - -- This list can be empty if the journal was empty, - -- or if hledger-ui has added its special date:-tomorrow to the query - -- and all txns are in the future. - intervalspans = dbg "intervalspans" $ splitSpan interval_ requestedspan' - -- The requested span enlarged to enclose a whole number of intervals. - -- This can be the null span if there were no intervals. 
- reportspan = dbg "reportspan" $ DateSpan (maybe Nothing spanStart $ headMay intervalspans) - (maybe Nothing spanEnd $ lastMay intervalspans) - mreportstart = spanStart reportspan - -- The user's query with no depth limit, and expanded to the report span - -- if there is one (otherwise any date queries are left as-is, which - -- handles the hledger-ui+future txns case above). - reportq = dbg "reportq" $ depthless $ - if reportspan == nulldatespan - then q - else And [datelessq, reportspandatesq] - where - reportspandatesq = dbg "reportspandatesq" $ dateqcons reportspan - -- The date spans to be included as report columns. - colspans :: [DateSpan] = dbg "colspans" $ splitSpan interval_ displayspan - where - displayspan - | empty_ = dbg "displayspan (-E)" reportspan -- all the requested intervals - | otherwise = dbg "displayspan" $ requestedspan `spanIntersect` matchedspan -- exclude leading/trailing empty intervals - matchedspan = dbg "matchedspan" . daysSpan $ map snd ps - - -- If doing cost valuation, convert amounts to cost. - j' = journalSelectingAmountFromOpts ropts j - - ---------------------------------------------------------------------- - -- 2. Calculate starting balances, if needed for -H - - -- Balances at report start date, from all earlier postings which otherwise match the query. - -- These balances are unvalued except maybe converted to cost. - startbals :: [(AccountName, MixedAmount)] = dbg' "startbals" $ map (\(a,_,_,b) -> (a,b)) startbalanceitems - where - (startbalanceitems,_) = dbg'' "starting balance report" $ balanceReport ropts''{value_=Nothing, percent_=False} startbalq j' - where - ropts' | tree_ ropts = ropts{no_elide_=True} - | otherwise = ropts{accountlistmode_=ALFlat} - ropts'' = ropts'{period_ = precedingperiod} - where - precedingperiod = dateSpanAsPeriod $ spanIntersect (DateSpan Nothing mreportstart) $ periodAsDateSpan period_ - -- q projected back before the report start date. 
- -- When there's no report start date, in case there are future txns (the hledger-ui case above), - -- we use emptydatespan to make sure they aren't counted as starting balance. - startbalq = dbg'' "startbalq" $ And [datelessq, dateqcons precedingspan] - where - precedingspan = case mreportstart of - Just d -> DateSpan Nothing (Just d) - Nothing -> emptydatespan - -- The matched accounts with a starting balance. All of these should appear - -- in the report even if they have no postings during the report period. - startaccts = dbg'' "startaccts" $ map fst startbals - -- Helpers to look up an account's starting balance. - startingBalanceFor a = fromMaybe nullmixedamt $ lookup a startbals - - ---------------------------------------------------------------------- - -- 3. Gather postings for each column. - - -- Postings matching the query within the report period. - ps :: [(Posting, Day)] = - dbg'' "ps" $ - map postingWithDate $ - journalPostings $ - filterJournalAmounts symq $ -- remove amount parts excluded by cur: - filterJournalPostings reportq $ -- remove postings not matched by (adjusted) query - j' - where - postingWithDate p = case whichDateFromOpts ropts of - PrimaryDate -> (p, postingDate p) - SecondaryDate -> (p, postingDate2 p) - - -- Group postings into their columns, with the column end dates. - colps :: [([Posting], Maybe Day)] = - dbg'' "colps" - [ (posts, end) | (DateSpan _ end, posts) <- M.toList colMap ] - where - colMap = foldr addPosting emptyMap ps - addPosting (p, d) = maybe id (M.adjust (p:)) $ latestSpanContaining colspans d - emptyMap = M.fromList . zip colspans $ repeat [] - - ---------------------------------------------------------------------- - -- 4. Calculate account balance changes in each column. - - -- In each column, gather the accounts that have postings and their change amount. 
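[Editor's note] The `colps`/`colMap` grouping above — seeding a map with every column span so empty columns survive, then adding each posting to the latest span containing its date — can be sketched with simplified types. Plain `Int` dates and `(start, end)` pairs stand in for `Day` and `DateSpan`, and `spanContaining` is a simplified `latestSpanContaining`.

```haskell
import qualified Data.Map as M

type Span = (Int, Int)  -- half-open [start, end), standing in for DateSpan

-- The latest span (if any) containing the given date.
spanContaining :: [Span] -> Int -> Maybe Span
spanContaining spans d =
  case [s | s@(lo, hi) <- spans, lo <= d, d < hi] of
    [] -> Nothing
    ss -> Just (last ss)

-- Group dated items into their column spans; items matching no span drop out.
groupByColumn :: [Span] -> [(String, Int)] -> M.Map Span [String]
groupByColumn spans = foldr add empty
  where
    empty = M.fromList (zip spans (repeat []))       -- keep empty columns
    add (p, d) = maybe id (M.adjust (p:)) (spanContaining spans d)

main :: IO ()
main = print (M.toList (groupByColumn [(1,10),(10,20)]
                                      [("a",5),("b",12),("c",99)]))
-- [((1,10),["a"]),((10,20),["b"])]
```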
- acctChangesFromPostings :: [Posting] -> [(ClippedAccountName, MixedAmount)] - acctChangesFromPostings ps = [(aname a, (if tree_ ropts then aibalance else aebalance) a) | a <- as] +multiBalanceReportWith ropts q j priceoracle = report + where + -- Queries, report/column dates. + reportspan = dbg "reportspan" $ calculateReportSpan ropts q j + reportq = dbg "reportq" $ makeReportQuery ropts reportspan q + + -- Group postings into their columns. + colps = dbg'' "colps" $ getPostingsByColumn ropts reportq j reportspan + colspans = dbg "colspans" $ M.keys colps + + -- The matched accounts with a starting balance. All of these should appear + -- in the report, even if they have no postings during the report period. + startbals = dbg' "startbals" $ startingBalances ropts reportq j reportspan + + -- Generate and postprocess the report, negating balances and taking percentages if needed + report = dbg' "report" $ + generateMultiBalanceReport ropts reportq j priceoracle colspans colps startbals + +-- | Generate a compound balance report from a list of CBCSubreportSpec. This +-- shares postings between the subreports. +compoundBalanceReport :: Day -> ReportOpts -> Journal -> [CBCSubreportSpec] + -> CompoundBalanceReport +compoundBalanceReport today ropts j = + compoundBalanceReportWith ropts q j (journalPriceOracle infer j) + where + q = queryFromOpts today ropts + infer = infer_value_ ropts + +-- | A helper for compoundBalanceReport, similar to multiBalanceReportWith. +compoundBalanceReportWith :: ReportOpts -> Query -> Journal -> PriceOracle + -> [CBCSubreportSpec] -> CompoundBalanceReport +compoundBalanceReportWith ropts q j priceoracle subreportspecs = cbr + where + -- Queries, report/column dates. + reportspan = dbg "reportspan" $ calculateReportSpan ropts q j + reportq = dbg "reportq" $ makeReportQuery ropts reportspan q + + -- Group postings into their columns. 
+ colps = dbg'' "colps" $ getPostingsByColumn ropts{empty_=True} reportq j reportspan + colspans = dbg "colspans" $ M.keys colps + + -- The matched accounts with a starting balance. All of these should appear + -- in the report, even if they have no postings during the report period. + startbals = dbg' "startbals" $ startingBalances ropts reportq j reportspan + + subreports = map generateSubreport subreportspecs + where + generateSubreport CBCSubreportSpec{..} = + ( cbcsubreporttitle + -- Postprocess the report, negating balances and taking percentages if needed + , prNormaliseSign cbcsubreportnormalsign $ + generateMultiBalanceReport ropts' reportq j priceoracle colspans colps' startbals' + , cbcsubreportincreasestotal + ) where - as = depthLimit $ - (if tree_ ropts then id else filter ((>0).anumpostings)) $ - drop 1 $ accountsFromPostings ps - depthLimit - | tree_ ropts = filter ((depthq `matchesAccount`).aname) -- exclude deeper balances - | otherwise = clipAccountsAndAggregate depth -- aggregate deeper balances at the depth limit - colacctchanges :: [[(ClippedAccountName, MixedAmount)]] = - dbg'' "colacctchanges" $ map (acctChangesFromPostings . fst) colps - - ---------------------------------------------------------------------- - -- 5. Gather the account balance changes into a regular matrix including the accounts - -- from all columns (and with -H, accounts with starting balances), adding zeroes where needed. - - -- All account names that will be displayed, possibly depth-clipped. - displayaccts :: [ClippedAccountName] = - dbg'' "displayaccts" $ - (if tree_ ropts then expandAccountNames else id) $ - nub $ map (clipOrEllipsifyAccountName depth) $ - if empty_ || balancetype_ == HistoricalBalance - then nubSort $ startaccts ++ allpostedaccts - else allpostedaccts - where - allpostedaccts :: [AccountName] = - dbg'' "allpostedaccts" . sort . accountNamesFromPostings $ map fst ps - -- Each column's balance changes for each account, adding zeroes where needed. 
- colallacctchanges :: [[(ClippedAccountName, MixedAmount)]] = - dbg'' "colallacctchanges" - [ sortOn fst $ unionBy (\(a,_) (a',_) -> a == a') postedacctchanges zeroes - | postedacctchanges <- colacctchanges ] - where zeroes = [(a, nullmixedamt) | a <- displayaccts] - -- Transpose to get each account's balance changes across all columns. - acctchanges :: [(ClippedAccountName, [MixedAmount])] = - dbg'' "acctchanges" - [(a, map snd abs) | abs@((a,_):_) <- transpose colallacctchanges] -- never null, or used when null... - - ---------------------------------------------------------------------- - -- 6. Build the report rows. - - -- One row per account, with account name info, row amounts, row total and row average. - rows :: [MultiBalanceReportRow] = - dbg'' "rows" $ - [ PeriodicReportRow a (accountNameLevel a) valuedrowbals rowtot rowavg - | (a,changes) <- dbg'' "acctchanges" acctchanges - -- The row amounts to be displayed: per-period changes, - -- zero-based cumulative totals, or - -- starting-balance-based historical balances. - , let rowbals = dbg'' "rowbals" $ case balancetype_ of - PeriodChange -> changes - CumulativeChange -> drop 1 $ scanl (+) 0 changes - HistoricalBalance -> drop 1 $ scanl (+) (startingBalanceFor a) changes - -- We may be converting amounts to value, per hledger_options.m4.md "Effect of --value on reports". - , let valuedrowbals = dbg'' "valuedrowbals" $ [avalue periodlastday amt | (amt,periodlastday) <- zip rowbals lastdays] - -- The total and average for the row. - -- These are always simply the sum/average of the displayed row amounts. - -- Total for a cumulative/historical report is always zero. - , let rowtot = if balancetype_==PeriodChange then sum valuedrowbals else 0 - , let rowavg = averageMixedAmounts valuedrowbals - , empty_ || depth == 0 || any (not . 
mixedAmountLooksZero) valuedrowbals - ] - where - avalue periodlast = - maybe id (mixedAmountApplyValuation priceoracle styles periodlast mreportlast today multiperiod) value_ - where - -- Some things needed if doing valuation. - styles = journalCommodityStyles j - mreportlast = reportPeriodLastDay ropts - today = fromMaybe (error' "multiBalanceReport: could not pick a valuation date, ReportOpts today_ is unset") today_ -- XXX shouldn't happen - multiperiod = interval_ /= NoInterval - -- The last day of each column's subperiod. - lastdays = - map ((maybe - (error' "multiBalanceReport: expected all spans to have an end date") -- XXX should not happen - (addDays (-1))) - . spanEnd) colspans - - ---------------------------------------------------------------------- - -- 7. Sort the report rows. - - -- Sort the rows by amount or by account declaration order. This is a bit tricky. - -- TODO: is it always ok to sort report rows after report has been generated, as a separate step ? - sortedrows :: [MultiBalanceReportRow] = - dbg' "sortedrows" $ - sortrows rows - where - sortrows - | sort_amount_ && accountlistmode_ == ALTree = sortTreeMBRByAmount - | sort_amount_ = sortFlatMBRByAmount - | otherwise = sortMBRByAccountDeclaration - where - -- Sort the report rows, representing a tree of accounts, by row total at each level. - -- Similar to sortMBRByAccountDeclaration/sortAccountNamesByDeclaration. 
- sortTreeMBRByAmount :: [MultiBalanceReportRow] -> [MultiBalanceReportRow] - sortTreeMBRByAmount rows = sortedrows - where - anamesandrows = [(prrName r, r) | r <- rows] - anames = map fst anamesandrows - atotals = [(prrName r, prrTotal r) | r <- rows] - accounttree = accountTree "root" anames - accounttreewithbals = mapAccounts setibalance accounttree - where - -- should not happen, but it's dangerous; TODO - setibalance a = a{aibalance=fromMaybe (error "sortTreeMBRByAmount 1") $ lookup (aname a) atotals} - sortedaccounttree = sortAccountTreeByAmount (fromMaybe NormallyPositive normalbalance_) accounttreewithbals - sortedanames = map aname $ drop 1 $ flattenAccounts sortedaccounttree - sortedrows = sortAccountItemsLike sortedanames anamesandrows - - -- Sort the report rows, representing a flat account list, by row total. - sortFlatMBRByAmount = sortBy (maybeflip $ comparing (normaliseMixedAmountSquashPricesForDisplay . prrTotal)) - where - maybeflip = if normalbalance_ == Just NormallyNegative then id else flip - - -- Sort the report rows by account declaration order then account name. - sortMBRByAccountDeclaration rows = sortedrows - where - anamesandrows = [(prrName r, r) | r <- rows] - anames = map fst anamesandrows - sortedanames = sortAccountNamesByDeclaration j (tree_ ropts) anames - sortedrows = sortAccountItemsLike sortedanames anamesandrows - - ---------------------------------------------------------------------- - -- 8. Build the report totals row. - - -- Calculate the column totals. These are always the sum of column amounts. - highestlevelaccts = [a | a <- displayaccts, not $ any (`elem` displayaccts) $ init $ expandAccountName a] - colamts = transpose . map prrAmounts $ filter isHighest rows - where isHighest row = not (tree_ ropts) || prrName row `elem` highestlevelaccts - coltotals :: [MixedAmount] = - dbg'' "coltotals" $ map sum colamts - -- Calculate the grand total and average. These are always the sum/average - -- of the column totals. 
- [grandtotal,grandaverage] = - let amts = map ($ map sum colamts) - [if balancetype_==PeriodChange then sum else const 0 - ,averageMixedAmounts - ] - in amts - -- Totals row. - totalsrow :: PeriodicReportRow () MixedAmount = - dbg' "totalsrow" $ PeriodicReportRow () 0 coltotals grandtotal grandaverage - - ---------------------------------------------------------------------- - -- 9. Map the report rows to percentages if needed - -- It is not correct to do this before step 6 due to the total and average columns. - -- This is not done in step 6, since the report totals are calculated in 8. - -- Perform the divisions to obtain percentages - mappedsortedrows :: [MultiBalanceReportRow] = - if not percent_ then sortedrows - else dbg'' "mappedsortedrows" - [ PeriodicReportRow aname alevel - (zipWith perdivide rowvals coltotals) - (rowtotal `perdivide` grandtotal) - (rowavg `perdivide` grandaverage) - | PeriodicReportRow aname alevel rowvals rowtotal rowavg <- sortedrows - ] - mappedtotalsrow :: PeriodicReportRow () MixedAmount - | percent_ = dbg'' "mappedtotalsrow" $ PeriodicReportRow () 0 - (map (\t -> perdivide t t) coltotals) - (perdivide grandtotal grandtotal) - (perdivide grandaverage grandaverage) - | otherwise = totalsrow - --- | Generates a simple non-columnar BalanceReport, but using multiBalanceReport, --- in order to support --historical. Does not support tree-mode boring parent eliding. --- If the normalbalance_ option is set, it adjusts the sorting and sign of amounts --- (see ReportOpts and CompoundBalanceCommand). 
-balanceReportFromMultiBalanceReport :: ReportOpts -> Query -> Journal -> BalanceReport -balanceReportFromMultiBalanceReport opts q j = (rows', total) - where - PeriodicReport _ rows (PeriodicReportRow _ _ totals _ _) = - multiBalanceReportWith opts q j (journalPriceOracle (infer_value_ opts) j) - rows' = [( a - , if flat_ opts then a else accountLeafName a -- BalanceReport expects full account name here with --flat - , if tree_ opts then d-1 else 0 -- BalanceReport uses 0-based account depths - , headDef nullmixedamt amts -- 0 columns is illegal, should not happen, return zeroes if it does - ) | PeriodicReportRow a d amts _ _ <- rows] - total = headDef nullmixedamt totals + ropts' = ropts{normalbalance_=Just cbcsubreportnormalsign} + -- Filter the column postings according to each subreport + colps' = filter (matchesPosting $ cbcsubreportquery j) <$> colps + startbals' = HM.filterWithKey (\k _ -> matchesAccount (cbcsubreportquery j) k) startbals + + -- Sum the subreport totals by column. Handle these cases: + -- - no subreports + -- - empty subreports, having no subtotals (#588) + -- - subreports with a shorter subtotals row than the others + overalltotals = case subreports of + [] -> PeriodicReportRow () [] nullmixedamt nullmixedamt + (r:rs) -> sconcat $ fmap subreportTotal (r:|rs) + where + subreportTotal (_, sr, increasestotal) = + (if increasestotal then id else fmap negate) $ prTotals sr + cbr = CompoundPeriodicReport "" colspans subreports overalltotals --- common rendering helper, XXX here for now +-- | Calculate starting balances, if needed for -H +-- +-- Balances at report start date, from all earlier postings which otherwise match the query. +-- These balances are unvalued. +-- TODO: Do we want to check whether to bother calculating these? isHistorical +-- and startDate is not nothing, otherwise mempty? This currently gives a +-- failure with some totals which are supposed to be 0 being blank. 
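The `overalltotals` fold above adds subreport totals column-wise, negating subreports whose `cbcsubreportincreasestotal` flag is off, and tolerating subtotal rows of different lengths. A minimal sketch of just that combination step, with hypothetical plain `[Int]` column totals standing in for `PeriodicReportRow`/`sconcat`:

```haskell
-- Combine per-subreport column totals into overall totals. The Bool is a
-- stand-in for cbcsubreportincreasestotal: False means the subreport's
-- totals are subtracted. Rows of different lengths are padded, matching
-- the "shorter subtotals row" case noted in the diff.
combineSubreportTotals :: [(Bool, [Int])] -> [Int]
combineSubreportTotals rs =
  foldr addPadded [] [ map (if increases then id else negate) totals
                     | (increases, totals) <- rs ]
  where
    addPadded (x:xs) (y:ys) = (x + y) : addPadded xs ys
    addPadded xs     []     = xs
    addPadded []     ys     = ys
```

An empty subreport list yields an empty totals row, corresponding to the `[] -> PeriodicReportRow () [] nullmixedamt nullmixedamt` case.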
+startingBalances :: ReportOpts -> Query -> Journal -> DateSpan -> HashMap AccountName Account +startingBalances ropts q j reportspan = acctchanges + where + acctchanges = acctChangesFromPostings ropts' startbalq . map fst $ + getPostings ropts' startbalq j + + -- q projected back before the report start date. + -- When there's no report start date, in case there are future txns (the hledger-ui case above), + -- we use emptydatespan to make sure they aren't counted as starting balance. + startbalq = dbg'' "startbalq" $ And [datelessq, precedingspanq] + datelessq = dbg "datelessq" $ filterQuery (not . queryIsDateOrDate2) q + + ropts' = case accountlistmode_ ropts of + ALTree -> ropts{no_elide_=True, period_=precedingperiod} + ALFlat -> ropts{period_=precedingperiod} + + precedingperiod = dateSpanAsPeriod . spanIntersect precedingspan . + periodAsDateSpan $ period_ ropts + precedingspan = DateSpan Nothing $ spanStart reportspan + precedingspanq = (if date2_ ropts then Date2 else Date) $ case precedingspan of + DateSpan Nothing Nothing -> emptydatespan + a -> a + +-- | Calculate the span of the report to be generated. +calculateReportSpan :: ReportOpts -> Query -> Journal -> DateSpan +calculateReportSpan ropts q j = reportspan + where + -- The date span specified by -b/-e/-p options and query args if any. + requestedspan = dbg "requestedspan" $ queryDateSpan (date2_ ropts) q + -- If the requested span is open-ended, close it using the journal's end dates. + -- This can still be the null (open) span if the journal is empty. + requestedspan' = dbg "requestedspan'" $ + requestedspan `spanDefaultsFrom` journalDateSpan (date2_ ropts) j + -- The list of interval spans enclosing the requested span. + -- This list can be empty if the journal was empty, + -- or if hledger-ui has added its special date:-tomorrow to the query + -- and all txns are in the future. 
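`startingBalances` gathers, per account, the balance accumulated by postings dated before the report start, which `-H` reports then use as opening balances. A simplified sketch of the idea, with hypothetical `(day, account, amount)` tuples in place of real postings and queries:

```haskell
import qualified Data.Map as M

-- Hypothetical minimal posting: (day, account, amount).
type SimplePosting = (Int, String, Int)

-- Sum, per account, all postings strictly before the report start day,
-- analogous to startingBalances restricting q to the preceding span.
startingBals :: Int -> [SimplePosting] -> M.Map String Int
startingBals reportstart ps =
  M.fromListWith (+) [ (acct, amt) | (day, acct, amt) <- ps, day < reportstart ]
```

Accounts with no earlier postings are simply absent from the map, just as the real code later fills missing entries with `nullacct`.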
+ intervalspans = dbg "intervalspans" $ splitSpan (interval_ ropts) requestedspan' + -- The requested span enlarged to enclose a whole number of intervals. + -- This can be the null span if there were no intervals. + reportspan = DateSpan (spanStart =<< headMay intervalspans) + (spanEnd =<< lastMay intervalspans) + +-- | Remove any date queries and insert queries from the report span. +-- The user's query expanded to the report span +-- if there is one (otherwise any date queries are left as-is, which +-- handles the hledger-ui+future txns case above). +makeReportQuery :: ReportOpts -> DateSpan -> Query -> Query +makeReportQuery ropts reportspan q + | reportspan == nulldatespan = q + | otherwise = And [dateless q, reportspandatesq] + where + reportspandatesq = dbg "reportspandatesq" $ dateqcons reportspan + dateless = dbg "dateless" . filterQuery (not . queryIsDateOrDate2) + dateqcons = if date2_ ropts then Date2 else Date + +-- | Group postings, grouped by their column +getPostingsByColumn :: ReportOpts -> Query -> Journal -> DateSpan -> Map DateSpan [Posting] +getPostingsByColumn ropts q j reportspan = columns + where + -- Postings matching the query within the report period. + ps :: [(Posting, Day)] = dbg'' "ps" $ getPostings ropts q j + days = map snd ps + + -- The date spans to be included as report columns. + colspans = calculateColSpans ropts reportspan days + addPosting (p, d) = maybe id (M.adjust (p:)) $ latestSpanContaining colspans d + emptyMap = M.fromList . zip colspans $ repeat [] + + -- Group postings into their columns + columns = foldr addPosting emptyMap ps + +-- | Gather postings matching the query within the report period. +getPostings :: ReportOpts -> Query -> Journal -> [(Posting, Day)] +getPostings ropts q = + map (\p -> (p, date p)) . + journalPostings . + filterJournalAmounts symq . -- remove amount parts excluded by cur: + filterJournalPostings reportq -- remove postings not matched by (adjusted) query + where + symq = dbg "symq" . 
filterQuery queryIsSym $ dbg "requested q" q + -- The user's query with no depth limit, and expanded to the report span + -- if there is one (otherwise any date queries are left as-is, which + -- handles the hledger-ui+future txns case above). + reportq = dbg "reportq" $ depthless q + depthless = dbg "depthless" . filterQuery (not . queryIsDepth) + + date = case whichDateFromOpts ropts of + PrimaryDate -> postingDate + SecondaryDate -> postingDate2 + +-- | Calculate the DateSpans to be used for the columns of the report. +calculateColSpans :: ReportOpts -> DateSpan -> [Day] -> [DateSpan] +calculateColSpans ropts reportspan days = + splitSpan (interval_ ropts) displayspan + where + displayspan + | empty_ ropts = dbg "displayspan (-E)" reportspan -- all the requested intervals + | otherwise = dbg "displayspan" $ reportspan `spanIntersect` matchedspan -- exclude leading/trailing empty intervals + matchedspan = dbg "matchedspan" $ daysSpan days + + +-- | Gather the account balance changes into a regular matrix +-- including the accounts from all columns. +calculateAccountChanges :: ReportOpts -> Query -> [DateSpan] + -> Map DateSpan [Posting] + -> HashMap ClippedAccountName (Map DateSpan Account) +calculateAccountChanges ropts q colspans colps + | queryDepth q == Just 0 = acctchanges <> elided + | otherwise = acctchanges + where + -- Transpose to get each account's balance changes across all columns. + acctchanges = transposeMap colacctchanges + + colacctchanges :: Map DateSpan (HashMap ClippedAccountName Account) = + dbg'' "colacctchanges" $ fmap (acctChangesFromPostings ropts q) colps + + elided = HM.singleton "..." $ M.fromList [(span, nullacct) | span <- colspans] + +-- | Given a set of postings, eg for a single report column, gather +-- the accounts that have postings and calculate the change amount for +-- each. Accounts and amounts will be depth-clipped appropriately if +-- a depth limit is in effect. 
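In flat mode, `acctChangesFromPostings` passes accounts through `clipAccountsAndAggregate`, which truncates names at the depth limit and sums the balances that collapse onto the same clipped name. A toy sketch of that clipping, with hypothetical helpers and plain `Int` amounts:

```haskell
import qualified Data.Map as M
import Data.List (intercalate)

-- Split "assets:bank:checking" into ["assets","bank","checking"].
splitAccount :: String -> [String]
splitAccount s = case break (== ':') s of
  (c, ':' : rest) -> c : splitAccount rest
  (c, _)          -> [c]

-- Truncate an account name to the given depth.
clipAccount :: Int -> String -> String
clipAccount d = intercalate ":" . take d . splitAccount

-- Aggregate deeper balances at the depth limit, like clipAccountsAndAggregate.
clipAndAggregate :: Int -> [(String, Int)] -> M.Map String Int
clipAndAggregate d changes =
  M.fromListWith (+) [ (clipAccount d a, amt) | (a, amt) <- changes ]
```

Tree mode instead keeps the full names and merely filters out accounts deeper than the limit, as the `ALTree` branch shows.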
+acctChangesFromPostings :: ReportOpts -> Query -> [Posting] -> HashMap ClippedAccountName Account +acctChangesFromPostings ropts q ps = HM.fromList [(aname a, a) | a <- as] + where + as = filterAccounts . drop 1 $ accountsFromPostings ps + filterAccounts = case accountlistmode_ ropts of + ALTree -> filter ((depthq `matchesAccount`) . aname) -- exclude deeper balances + ALFlat -> clipAccountsAndAggregate (queryDepth depthq) . -- aggregate deeper balances at the depth limit. + filter ((0<) . anumpostings) + depthq = dbg "depthq" $ filterQuery queryIsDepth q + +-- | Accumulate and value amounts, as specified by the report options. +-- +-- Makes sure all report columns have an entry. +accumValueAmounts :: ReportOpts -> Journal -> PriceOracle -> [DateSpan] + -> HashMap ClippedAccountName Account + -> HashMap ClippedAccountName (Map DateSpan Account) + -> HashMap ClippedAccountName (Map DateSpan Account) +accumValueAmounts ropts j priceoracle colspans startbals acctchanges = -- PARTIAL: + HM.mapWithKey processRow $ acctchanges <> (mempty <$ startbals) + where + -- Must accumulate before valuing, since valuation can change without any + -- postings. Make sure every column has an entry. + processRow name changes = M.mapWithKey valueAcct . rowbals name $ changes <> zeros + + -- The row amounts to be displayed: per-period changes, + -- zero-based cumulative totals, or + -- starting-balance-based historical balances. + rowbals name changes = dbg'' "rowbals" $ case balancetype_ ropts of + PeriodChange -> changes + CumulativeChange -> snd $ M.mapAccum f nullacct changes + HistoricalBalance -> snd $ M.mapAccum f (startingBalanceFor name) changes + where f a b = let s = sumAcct a b in (s, s) + + -- Add the values of two accounts. 
Should be right-biased, since it's used + -- in scanl, so other properties (such as anumpostings) stay in the right place + sumAcct Account{aibalance=i1,aebalance=e1} a@Account{aibalance=i2,aebalance=e2} = + a{aibalance = i1 + i2, aebalance = e1 + e2} + + -- We may be converting amounts to value, per hledger_options.m4.md "Effect of --value on reports". + valueAcct (DateSpan _ (Just end)) acct = + acct{aibalance = value (aibalance acct), aebalance = value (aebalance acct)} + where value = avalue (addDays (-1) end) + valueAcct _ _ = error' "multiBalanceReport: expected all spans to have an end date" -- XXX should not happen + + avalue periodlast = maybe id + (mixedAmountApplyValuation priceoracle styles periodlast mreportlast today multiperiod) $ + value_ ropts + where + -- Some things needed if doing valuation. + styles = journalCommodityStyles j + mreportlast = reportPeriodLastDay ropts + today = fromMaybe (error' "multiBalanceReport: could not pick a valuation date, ReportOpts today_ is unset") $ today_ ropts -- XXX shouldn't happen + multiperiod = interval_ ropts /= NoInterval + + startingBalanceFor a = HM.lookupDefault nullacct a startbals + zeros = M.fromList [(span, nullacct) | span <- colspans] + + +-- | Lay out a set of postings grouped by date span into a regular matrix with rows +-- given by AccountName and columns by DateSpan, then generate a MultiBalanceReport +-- from the columns. +generateMultiBalanceReport :: ReportOpts -> Query -> Journal -> PriceOracle + -> [DateSpan] + -> Map DateSpan [Posting] + -> HashMap AccountName Account + -> MultiBalanceReport +generateMultiBalanceReport ropts q j priceoracle colspans colps startbals = report + where + -- Each account's balance changes across all columns. 
+ acctchanges = dbg'' "acctchanges" $ calculateAccountChanges ropts q colspans colps + + -- Process changes into normal, cumulative, or historical amounts, plus value them + accumvalued = accumValueAmounts ropts j priceoracle colspans startbals acctchanges + + -- All account names that will be displayed, possibly depth-clipped. + displaynames = dbg'' "displaynames" $ displayedAccounts ropts q accumvalued + + -- All the rows of the report. + rows = dbg'' "rows" $ buildReportRows ropts displaynames accumvalued + + -- Calculate column totals + totalsrow = dbg' "totalsrow" $ calculateTotalsRow ropts rows + + -- Sorted report rows. + sortedrows = dbg' "sortedrows" $ sortRows ropts j rows + + -- Postprocess the report, negating balances and taking percentages if needed + report = postprocessReport ropts $ PeriodicReport colspans sortedrows totalsrow + +-- | Build the report rows. +-- +-- One row per account, with account name info, row amounts, row total and row average. +buildReportRows :: ReportOpts + -> HashMap AccountName DisplayName + -> HashMap AccountName (Map DateSpan Account) + -> [MultiBalanceReportRow] +buildReportRows ropts displaynames = toList . HM.mapMaybeWithKey mkRow + where + mkRow name accts = do + displayname <- HM.lookup name displaynames + return $ PeriodicReportRow displayname rowbals rowtot rowavg + where + rowbals = map balance $ toList accts + -- The total and average for the row. + -- These are always simply the sum/average of the displayed row amounts. + -- Total for a cumulative/historical report is always the last column. 
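The row-building step turns each account's per-period changes into the displayed amounts (plain changes, running totals, or start-balance-based running totals) and a row total; for cumulative and historical reports that total is the last column rather than the sum. A sketch with hypothetical `Int` amounts in place of `MixedAmount`/`Account`:

```haskell
data BalanceType = PeriodChange | CumulativeChange | HistoricalBalance

-- Turn per-period changes into the displayed row amounts,
-- like the rowbals cases in accumValueAmounts.
rowBalances :: BalanceType -> Int -> [Int] -> [Int]
rowBalances PeriodChange      _        changes = changes
rowBalances CumulativeChange  _        changes = scanl1 (+) changes
rowBalances HistoricalBalance startbal changes = drop 1 $ scanl (+) startbal changes

-- Row total: sum of the displayed amounts for a period-change report,
-- otherwise the last column (lastDef 0 in the diff above).
rowTotal :: BalanceType -> [Int] -> Int
rowTotal PeriodChange xs = sum xs
rowTotal _            [] = 0
rowTotal _            xs = last xs
```

The real code runs the accumulation over a `Map DateSpan Account` with `M.mapAccum`, but the arithmetic is the same left-to-right scan.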
+ rowtot = case balancetype_ ropts of + PeriodChange -> sum rowbals + _ -> lastDef 0 rowbals + rowavg = averageMixedAmounts rowbals + balance = case accountlistmode_ ropts of ALTree -> aibalance; ALFlat -> aebalance + +-- | Calculate accounts which are to be displayed in the report, as well as +-- their name and depth +displayedAccounts :: ReportOpts -> Query + -> HashMap AccountName (Map DateSpan Account) + -> HashMap AccountName DisplayName +displayedAccounts ropts q valuedaccts + | depth == 0 = HM.singleton "..." $ DisplayName "..." "..." 1 + | otherwise = HM.mapWithKey (\a _ -> displayedName a) displayedAccts + where + -- Accounts which are to be displayed + displayedAccts = (if depth == 0 then id else HM.filterWithKey keep) valuedaccts + where + keep name amts = isInteresting name amts || name `HM.member` interestingParents + + displayedName name = case accountlistmode_ ropts of + ALTree -> DisplayName name leaf . max 0 $ level - boringParents + ALFlat -> DisplayName name droppedName 1 + where + droppedName = accountNameDrop (drop_ ropts) name + leaf = accountNameFromComponents . reverse . map accountLeafName $ + droppedName : takeWhile notDisplayed parents + + level = max 0 $ accountNameLevel name - drop_ ropts + parents = take (level - 1) $ parentAccountNames name + boringParents = if no_elide_ ropts then 0 else length $ filter notDisplayed parents + notDisplayed = not . (`HM.member` displayedAccts) + + -- Accounts interesting for their own sake + isInteresting name amts = + d <= depth -- Throw out anything too deep + && ((empty_ ropts && all (null . 
asubs) amts) -- Keep all leaves when using empty_ + || not (isZeroRow balance amts)) -- Throw out anything with zero balance + where + d = accountNameLevel name + balance | ALTree <- accountlistmode_ ropts, d == depth = aibalance + | otherwise = aebalance + + -- Accounts interesting because they are a fork for interesting subaccounts + interestingParents = dbg'' "interestingParents" $ case accountlistmode_ ropts of + ALTree -> HM.filterWithKey hasEnoughSubs numSubs + ALFlat -> mempty + where + hasEnoughSubs name nsubs = nsubs >= minSubs && accountNameLevel name > drop_ ropts + minSubs = if no_elide_ ropts then 1 else 2 + + isZeroRow balance = all (mixedAmountLooksZero . balance) + depth = fromMaybe maxBound $ queryDepth q + numSubs = subaccountTallies . HM.keys $ HM.filterWithKey isInteresting valuedaccts + +-- | Sort the rows by amount or by account declaration order. +sortRows :: ReportOpts -> Journal -> [MultiBalanceReportRow] -> [MultiBalanceReportRow] +sortRows ropts j + | sort_amount_ ropts, ALTree <- accountlistmode_ ropts = sortTreeMBRByAmount + | sort_amount_ ropts, ALFlat <- accountlistmode_ ropts = sortFlatMBRByAmount + | otherwise = sortMBRByAccountDeclaration + where + -- Sort the report rows, representing a tree of accounts, by row total at each level. + -- Similar to sortMBRByAccountDeclaration/sortAccountNamesByDeclaration. + sortTreeMBRByAmount :: [MultiBalanceReportRow] -> [MultiBalanceReportRow] + sortTreeMBRByAmount rows = mapMaybe (`HM.lookup` rowMap) sortedanames + where + accounttree = accountTree "root" $ map prrFullName rows + rowMap = HM.fromList $ map (\row -> (prrFullName row, row)) rows + -- Set the inclusive balance of an account from the rows, or sum the + -- subaccounts if it's not present + accounttreewithbals = mapAccounts setibalance accounttree + setibalance a = a{aibalance = maybe (sum . 
map aibalance $ asubs a) prrTotal $ + HM.lookup (aname a) rowMap} + sortedaccounttree = sortAccountTreeByAmount (fromMaybe NormallyPositive $ normalbalance_ ropts) accounttreewithbals + sortedanames = map aname $ drop 1 $ flattenAccounts sortedaccounttree + + -- Sort the report rows, representing a flat account list, by row total. + sortFlatMBRByAmount :: [MultiBalanceReportRow] -> [MultiBalanceReportRow] + sortFlatMBRByAmount = case normalbalance_ ropts of + Just NormallyNegative -> sortOn amt + _ -> sortOn (Down . amt) + where amt = normaliseMixedAmountSquashPricesForDisplay . prrTotal + + -- Sort the report rows by account declaration order then account name. + sortMBRByAccountDeclaration :: [MultiBalanceReportRow] -> [MultiBalanceReportRow] + sortMBRByAccountDeclaration rows = sortRowsLike sortedanames rows + where + sortedanames = sortAccountNamesByDeclaration j (tree_ ropts) $ map prrFullName rows + +-- | Build the report totals row. +-- +-- Calculate the column totals. These are always the sum of column amounts. +calculateTotalsRow :: ReportOpts -> [MultiBalanceReportRow] -> PeriodicReportRow () MixedAmount +calculateTotalsRow ropts rows = + PeriodicReportRow () coltotals grandtotal grandaverage + where + isTopRow row = flat_ ropts || not (any (`HM.member` rowMap) parents) + where parents = init . expandAccountName $ prrFullName row + rowMap = HM.fromList $ map (\row -> (prrFullName row, row)) rows + + colamts = transpose . map prrAmounts $ filter isTopRow rows + + coltotals :: [MixedAmount] = dbg'' "coltotals" $ map sum colamts + + -- Calculate the grand total and average. These are always the sum/average + -- of the column totals. + -- Total for a cumulative/historical report is always the last column. 
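`calculateTotalsRow` sums only the top rows: in tree mode a row is excluded when any of its ancestors is also displayed, so parent and child balances are not both counted. A sketch of that `isTopRow` filter over hypothetical account-name strings:

```haskell
import Data.List (inits, intercalate)
import qualified Data.Set as S

-- Split "a:b:c" into ["a","b","c"].
splitAccount :: String -> [String]
splitAccount s = case break (== ':') s of
  (c, ':' : rest) -> c : splitAccount rest
  (c, _)          -> [c]

-- Proper ancestors of an account name: "a:b:c" -> ["a","a:b"].
parentNames :: String -> [String]
parentNames = map (intercalate ":") . init . tail . inits . splitAccount

-- Keep only rows with no displayed ancestor; summing these per column
-- gives totals without double counting, as isTopRow does in tree mode.
topLevelRows :: [String] -> [String]
topLevelRows names = filter isTop names
  where
    displayed = S.fromList names
    isTop a = not (any (`S.member` displayed) (parentNames a))
```

In flat mode every row is kept (`flat_ ropts ||` short-circuits the parent check), since flat rows already hold exclusive balances.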
+ grandtotal = case balancetype_ ropts of + PeriodChange -> sum coltotals + _ -> lastDef 0 coltotals + grandaverage = averageMixedAmounts coltotals + +-- | Map the report rows to percentages and negate if needed +postprocessReport :: ReportOpts -> MultiBalanceReport -> MultiBalanceReport +postprocessReport ropts = maybePercent . maybeInvert + where + maybeInvert = if invert_ ropts then fmap negate else id + maybePercent = if percent_ ropts then prPercent else id + + prPercent (PeriodicReport spans rows totalrow) = + PeriodicReport spans (map percentRow rows) (percentRow totalrow) + where + percentRow (PeriodicReportRow name rowvals rowtotal rowavg) = + PeriodicReportRow name + (zipWith perdivide rowvals $ prrAmounts totalrow) + (perdivide rowtotal $ prrTotal totalrow) + (perdivide rowavg $ prrAverage totalrow) + + +-- | Transpose a Map of HashMaps to a HashMap of Maps. +-- +-- Makes sure that all DateSpans are present in all rows. +transposeMap :: Map DateSpan (HashMap AccountName a) + -> HashMap AccountName (Map DateSpan a) +transposeMap xs = M.foldrWithKey addSpan mempty xs + where + addSpan span acctmap seen = HM.foldrWithKey (addAcctSpan span) seen acctmap + + addAcctSpan span acct a = HM.alter f acct + where f = Just . M.insert span a . fromMaybe mempty + +-- | A sorting helper: sort a list of things (eg report rows) keyed by account name +-- to match the provided ordering of those same account names. +sortRowsLike :: [AccountName] -> [PeriodicReportRow DisplayName b] -> [PeriodicReportRow DisplayName b] +sortRowsLike sortedas rows = mapMaybe (`HM.lookup` rowMap) sortedas + where rowMap = HM.fromList $ map (\row -> (prrFullName row, row)) rows + +-- | Given a list of account names, find all forking parent accounts, i.e. +-- those which fork between different branches +subaccountTallies :: [AccountName] -> HashMap AccountName Int +subaccountTallies = foldr incrementParent mempty . 
expandAccountNames + where + incrementParent a = HM.insertWith (+) (parentAccountName a) 1 + +-- | A helper: what percentage is the second mixed amount of the first ? +-- Keeps the sign of the first amount. +-- Uses unifyMixedAmount to unify each argument and then divides them. +-- Both amounts should be in the same, single commodity. +-- This can call error if the arguments are not right. +perdivide :: MixedAmount -> MixedAmount -> MixedAmount +perdivide a b = fromMaybe (error' errmsg) $ do -- PARTIAL: + a' <- unifyMixedAmount a + b' <- unifyMixedAmount b + guard $ amountIsZero a' || amountIsZero b' || acommodity a' == acommodity b' + return $ mixed [per $ if aquantity b' == 0 then 0 else aquantity a' / abs (aquantity b') * 100] + where errmsg = "Cannot calculate percentages if accounts have different commodities (Hint: Try --cost, -V or similar flags.)" + +-- Local debug helper +-- add a prefix to this function's debug output +dbg s = let p = "multiBalanceReport" in Hledger.Utils.dbg3 (p++" "++s) +dbg' s = let p = "multiBalanceReport" in Hledger.Utils.dbg4 (p++" "++s) +dbg'' s = let p = "multiBalanceReport" in Hledger.Utils.dbg5 (p++" "++s) +-- dbg = const id -- exclude this function from debug output + +-- common rendering helper, XXX here for now tableAsText :: ReportOpts -> (a -> String) -> Table String String a -> String tableAsText (ReportOpts{pretty_tables_ = pretty}) showcell = unlines @@ -398,12 +610,12 @@ tests_MultiBalanceReport = tests "MultiBalanceReport" [ let - amt0 = Amount {acommodity="$", aquantity=0, aprice=Nothing, astyle=AmountStyle {ascommodityside = L, ascommodityspaced = False, asprecision = 2, asdecimalpoint = Just '.', asdigitgroups = Nothing}, aismultiplier=False} + amt0 = Amount {acommodity="$", aquantity=0, aprice=Nothing, astyle=AmountStyle {ascommodityside = L, ascommodityspaced = False, asprecision = Precision 2, asdecimalpoint = Just '.', asdigitgroups = Nothing}, aismultiplier=False} (opts,journal) `gives` r = do let (eitems, 
etotal) = r (PeriodicReport _ aitems atotal) = multiBalanceReport nulldate opts journal - showw (PeriodicReportRow acct indent lAmt amt amt') - = (acct, accountLeafName acct, indent, map showMixedAmountDebug lAmt, showMixedAmountDebug amt, showMixedAmountDebug amt') + showw (PeriodicReportRow a lAmt amt amt') + = (displayFull a, displayName a, displayDepth a, map showMixedAmountDebug lAmt, showMixedAmountDebug amt, showMixedAmountDebug amt') (map showw aitems) @?= (map showw eitems) showMixedAmountDebug (prrTotal atotal) @?= showMixedAmountDebug etotal -- we only check the sum of the totals in @@ -414,10 +626,10 @@ ,test "with -H on a populated period" $ (defreportopts{period_= PeriodBetween (fromGregorian 2008 1 1) (fromGregorian 2008 1 2), balancetype_=HistoricalBalance}, samplejournal) `gives` ( - [ PeriodicReportRow "assets:bank:checking" 3 [mamountp' "$1.00"] (Mixed [nullamt]) (Mixed [amt0 {aquantity=1}]) - , PeriodicReportRow "income:salary" 2 [mamountp' "$-1.00"] (Mixed [nullamt]) (Mixed [amt0 {aquantity=(-1)}]) + [ PeriodicReportRow (flatDisplayName "assets:bank:checking") [mamountp' "$1.00"] (mamountp' "$1.00") (Mixed [amt0 {aquantity=1}]) + , PeriodicReportRow (flatDisplayName "income:salary") [mamountp' "$-1.00"] (mamountp' "$-1.00") (Mixed [amt0 {aquantity=(-1)}]) ], - Mixed [nullamt]) + mamountp' "$0.00") -- ,test "a valid history on an empty period" $ -- (defreportopts{period_= PeriodBetween (fromGregorian 2008 1 2) (fromGregorian 2008 1 3), balancetype_=HistoricalBalance}, samplejournal) `gives` diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/PostingsReport.hs haskell-hledger-lib-1.19.1/Hledger/Reports/PostingsReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/PostingsReport.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/PostingsReport.hs 2020-09-01 17:33:33.000000000 +0000 @@ -4,7 +4,6 @@ -} -{-# LANGUAGE DeriveDataTypeable #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE OverloadedStrings #-} {-# 
LANGUAGE RecordWildCards #-} @@ -72,11 +71,11 @@ where reportspan = adjustReportDates ropts q j whichdate = whichDateFromOpts ropts - depth = queryDepth q + mdepth = queryDepth q styles = journalCommodityStyles j priceoracle = journalPriceOracle infer_value_ j multiperiod = interval_ /= NoInterval - today = fromMaybe (error' "postingsReport: could not pick a valuation date, ReportOpts today_ is unset") today_ + today = fromMaybe (error' "postingsReport: could not pick a valuation date, ReportOpts today_ is unset") today_ -- PARTIAL: -- postings to be included in the report, and similarly-matched postings before the report start date (precedingps, reportps) = matchedPostingsBeforeAndDuring ropts q j reportspan @@ -84,7 +83,7 @@ -- Postings, or summary postings with their subperiod's end date, to be displayed. displayps :: [(Posting, Maybe Day)] | multiperiod = - let summaryps = summarisePostingsByInterval interval_ whichdate depth showempty reportspan reportps + let summaryps = summarisePostingsByInterval interval_ whichdate mdepth showempty reportspan reportps in [(pvalue p lastday, Just periodend) | (p, periodend) <- summaryps, let lastday = addDays (-1) periodend] | otherwise = [(pvalue p reportorjournallast, Nothing) | p <- reportps] @@ -95,13 +94,13 @@ where mreportlast = reportPeriodLastDay ropts reportorjournallast = - fromMaybe (error' "postingsReport: expected a non-empty journal") $ -- XXX shouldn't happen + fromMaybe (error' "postingsReport: expected a non-empty journal") $ -- PARTIAL: shouldn't happen reportPeriodOrJournalLastDay ropts j -- Posting report items ready for display. 
items = - dbg1 "postingsReport items" $ - postingsReportItems displayps (nullposting,Nothing) whichdate depth startbal runningcalc startnum + dbg4 "postingsReport items" $ + postingsReportItems displayps (nullposting,Nothing) whichdate mdepth startbal runningcalc startnum where -- In historical mode we'll need a starting balance, which we -- may be converting to value per hledger_options.m4.md "Effect @@ -118,7 +117,7 @@ -- XXX constrain valuation type to AtDate daybeforereportstart here ? where daybeforereportstart = - maybe (error' "postingsReport: expected a non-empty journal") -- XXX shouldn't happen + maybe (error' "postingsReport: expected a non-empty journal") -- PARTIAL: shouldn't happen (addDays (-1)) $ reportPeriodOrJournalStart ropts j @@ -145,16 +144,16 @@ adjustReportDates opts q j = reportspan where -- see also multiBalanceReport - requestedspan = dbg1 "requestedspan" $ queryDateSpan' q -- span specified by -b/-e/-p options and query args - journalspan = dbg1 "journalspan" $ dates `spanUnion` date2s -- earliest and latest dates (or date2s) in the journal + requestedspan = dbg3 "requestedspan" $ queryDateSpan' q -- span specified by -b/-e/-p options and query args + journalspan = dbg3 "journalspan" $ dates `spanUnion` date2s -- earliest and latest dates (or date2s) in the journal where dates = journalDateSpan False j date2s = journalDateSpan True j - requestedspanclosed = dbg1 "requestedspanclosed" $ requestedspan `spanDefaultsFrom` journalspan -- if open-ended, close it using the journal's dates (if any) - intervalspans = dbg1 "intervalspans" $ splitSpan (interval_ opts) requestedspanclosed -- get the whole intervals enclosing that - mreportstart = dbg1 "reportstart" $ maybe Nothing spanStart $ headMay intervalspans -- start of the first interval, or open ended - mreportend = dbg1 "reportend" $ maybe Nothing spanEnd $ lastMay intervalspans -- end of the last interval, or open ended - reportspan = dbg1 "reportspan" $ DateSpan mreportstart mreportend -- 
the requested span enlarged to whole intervals if possible + requestedspanclosed = dbg3 "requestedspanclosed" $ requestedspan `spanDefaultsFrom` journalspan -- if open-ended, close it using the journal's dates (if any) + intervalspans = dbg3 "intervalspans" $ splitSpan (interval_ opts) requestedspanclosed -- get the whole intervals enclosing that + mreportstart = dbg3 "reportstart" $ maybe Nothing spanStart $ headMay intervalspans -- start of the first interval, or open ended + mreportend = dbg3 "reportend" $ maybe Nothing spanEnd $ lastMay intervalspans -- end of the last interval, or open ended + reportspan = dbg3 "reportspan" $ DateSpan mreportstart mreportend -- the requested span enlarged to whole intervals if possible -- | Find postings matching a given query, within a given date span, -- and also any similarly-matched postings before that date span. @@ -162,45 +161,46 @@ -- A helper for the postings report. matchedPostingsBeforeAndDuring :: ReportOpts -> Query -> Journal -> DateSpan -> ([Posting],[Posting]) matchedPostingsBeforeAndDuring opts q j (DateSpan mstart mend) = - dbg1 "beforeps, duringps" $ span (beforestartq `matchesPosting`) beforeandduringps + dbg5 "beforeps, duringps" $ span (beforestartq `matchesPosting`) beforeandduringps where - beforestartq = dbg1 "beforestartq" $ dateqtype $ DateSpan Nothing mstart + beforestartq = dbg3 "beforestartq" $ dateqtype $ DateSpan Nothing mstart beforeandduringps = - dbg1 "ps5" $ sortOn sortdate $ -- sort postings by date or date2 - dbg1 "ps4" $ (if invert_ opts then map negatePostingAmount else id) $ -- with --invert, invert amounts - dbg1 "ps3" $ map (filterPostingAmount symq) $ -- remove amount parts which the query's cur: terms would exclude - dbg1 "ps2" $ (if related_ opts then concatMap relatedPostings else id) $ -- with -r, replace each with its sibling postings - dbg1 "ps1" $ filter (beforeandduringq `matchesPosting`) $ -- filter postings by the query, with no start date or depth limit + dbg5 "ps5" $ 
sortOn sortdate $ -- sort postings by date or date2 + dbg5 "ps4" $ (if invert_ opts then map negatePostingAmount else id) $ -- with --invert, invert amounts + dbg5 "ps3" $ map (filterPostingAmount symq) $ -- remove amount parts which the query's cur: terms would exclude + dbg5 "ps2" $ (if related_ opts then concatMap relatedPostings else id) $ -- with -r, replace each with its sibling postings + dbg5 "ps1" $ filter (beforeandduringq `matchesPosting`) $ -- filter postings by the query, with no start date or depth limit journalPostings $ journalSelectingAmountFromOpts opts j -- maybe convert to cost early, will be seen by amt:. XXX what about converting to value ? where - beforeandduringq = dbg1 "beforeandduringq" $ And [depthless $ dateless q, beforeendq] + beforeandduringq = dbg4 "beforeandduringq" $ And [depthless $ dateless q, beforeendq] where depthless = filterQuery (not . queryIsDepth) dateless = filterQuery (not . queryIsDateOrDate2) beforeendq = dateqtype $ DateSpan Nothing mend sortdate = if date2_ opts then postingDate2 else postingDate - symq = dbg1 "symq" $ filterQuery queryIsSym q + symq = dbg4 "symq" $ filterQuery queryIsSym q dateqtype | queryIsDate2 dateq || (queryIsDate dateq && date2_ opts) = Date2 | otherwise = Date where - dateq = dbg1 "dateq" $ filterQuery queryIsDateOrDate2 $ dbg1 "q" q -- XXX confused by multiple date:/date2: ? + dateq = dbg4 "dateq" $ filterQuery queryIsDateOrDate2 $ dbg4 "q" q -- XXX confused by multiple date:/date2: ? -- | Generate postings report line items from a list of postings or (with -- non-Nothing dates attached) summary postings. 
-postingsReportItems :: [(Posting,Maybe Day)] -> (Posting,Maybe Day) -> WhichDate -> Int -> MixedAmount -> (Int -> MixedAmount -> MixedAmount -> MixedAmount) -> Int -> [PostingsReportItem] +postingsReportItems :: [(Posting,Maybe Day)] -> (Posting,Maybe Day) -> WhichDate -> Maybe Int -> MixedAmount -> (Int -> MixedAmount -> MixedAmount -> MixedAmount) -> Int -> [PostingsReportItem] postingsReportItems [] _ _ _ _ _ _ = [] -postingsReportItems ((p,menddate):ps) (pprev,menddateprev) wd d b runningcalcfn itemnum = i:(postingsReportItems ps (p,menddate) wd d b' runningcalcfn (itemnum+1)) - where - i = mkpostingsReportItem showdate showdesc wd menddate p' b' - (showdate, showdesc) | isJust menddate = (menddate /= menddateprev, False) - | otherwise = (isfirstintxn || isdifferentdate, isfirstintxn) - isfirstintxn = ptransaction p /= ptransaction pprev - isdifferentdate = case wd of PrimaryDate -> postingDate p /= postingDate pprev - SecondaryDate -> postingDate2 p /= postingDate2 pprev - p' = p{paccount= clipOrEllipsifyAccountName d $ paccount p} - b' = runningcalcfn itemnum b (pamount p) +postingsReportItems ((p,menddate):ps) (pprev,menddateprev) wd d b runningcalcfn itemnum = + i:(postingsReportItems ps (p,menddate) wd d b' runningcalcfn (itemnum+1)) + where + i = mkpostingsReportItem showdate showdesc wd menddate p' b' + (showdate, showdesc) | isJust menddate = (menddate /= menddateprev, False) + | otherwise = (isfirstintxn || isdifferentdate, isfirstintxn) + isfirstintxn = ptransaction p /= ptransaction pprev + isdifferentdate = case wd of PrimaryDate -> postingDate p /= postingDate pprev + SecondaryDate -> postingDate2 p /= postingDate2 pprev + p' = p{paccount= clipOrEllipsifyAccountName d $ paccount p} + b' = runningcalcfn itemnum b (pamount p) -- | Generate one postings report line item, containing the posting, -- the current running balance, and optionally the posting date and/or @@ -221,11 +221,11 @@ -- | Convert a list of postings into summary postings, one per 
interval, -- aggregated to the specified depth if any. -- Each summary posting will have a non-Nothing interval end date. -summarisePostingsByInterval :: Interval -> WhichDate -> Int -> Bool -> DateSpan -> [Posting] -> [SummaryPosting] -summarisePostingsByInterval interval wd depth showempty reportspan ps = concatMap summarisespan $ splitSpan interval reportspan - where - summarisespan s = summarisePostingsInDateSpan s wd depth showempty (postingsinspan s) - postingsinspan s = filter (isPostingInDateSpan' wd s) ps +summarisePostingsByInterval :: Interval -> WhichDate -> Maybe Int -> Bool -> DateSpan -> [Posting] -> [SummaryPosting] +summarisePostingsByInterval interval wd mdepth showempty reportspan ps = concatMap summarisespan $ splitSpan interval reportspan + where + summarisespan s = summarisePostingsInDateSpan s wd mdepth showempty (postingsinspan s) + postingsinspan s = filter (isPostingInDateSpan' wd s) ps -- | Given a date span (representing a report interval) and a list of -- postings within it, aggregate the postings into one summary posting per @@ -239,28 +239,27 @@ -- The showempty flag includes spans with no postings and also postings -- with 0 amount. 
-- -summarisePostingsInDateSpan :: DateSpan -> WhichDate -> Int -> Bool -> [Posting] -> [SummaryPosting] -summarisePostingsInDateSpan (DateSpan b e) wd depth showempty ps - | null ps && (isNothing b || isNothing e) = [] - | null ps && showempty = [(summaryp, e')] - | otherwise = summarypes - where - postingdate = if wd == PrimaryDate then postingDate else postingDate2 - b' = fromMaybe (maybe nulldate postingdate $ headMay ps) b - e' = fromMaybe (maybe (addDays 1 nulldate) postingdate $ lastMay ps) e - summaryp = nullposting{pdate=Just b'} - clippedanames | depth > 0 = nub $ map (clipAccountName depth) anames - | otherwise = ["..."] - summaryps | depth > 0 = [summaryp{paccount=a,pamount=balance a} | a <- clippedanames] - | otherwise = [summaryp{paccount="...",pamount=sum $ map pamount ps}] - summarypes = map (, e') $ (if showempty then id else filter (not . mixedAmountLooksZero . pamount)) summaryps - anames = nubSort $ map paccount ps - -- aggregate balances by account, like ledgerFromJournal, then do depth-clipping - accts = accountsFromPostings ps - balance a = maybe nullmixedamt bal $ lookupAccount a accts - where - bal = if isclipped a then aibalance else aebalance - isclipped a = accountNameLevel a >= depth +summarisePostingsInDateSpan :: DateSpan -> WhichDate -> Maybe Int -> Bool -> [Posting] -> [SummaryPosting] +summarisePostingsInDateSpan (DateSpan b e) wd mdepth showempty ps + | null ps && (isNothing b || isNothing e) = [] + | null ps && showempty = [(summaryp, e')] + | otherwise = summarypes + where + postingdate = if wd == PrimaryDate then postingDate else postingDate2 + b' = fromMaybe (maybe nulldate postingdate $ headMay ps) b + e' = fromMaybe (maybe (addDays 1 nulldate) postingdate $ lastMay ps) e + summaryp = nullposting{pdate=Just b'} + clippedanames = nub $ map (clipAccountName mdepth) anames + summaryps | mdepth == Just 0 = [summaryp{paccount="...",pamount=sum $ map pamount ps}] + | otherwise = [summaryp{paccount=a,pamount=balance a} | a <- 
clippedanames] + summarypes = map (, e') $ (if showempty then id else filter (not . mixedAmountLooksZero . pamount)) summaryps + anames = nubSort $ map paccount ps + -- aggregate balances by account, like ledgerFromJournal, then do depth-clipping + accts = accountsFromPostings ps + balance a = maybe nullmixedamt bal $ lookupAccount a accts + where + bal = if isclipped a then aibalance else aebalance + isclipped a = maybe True (accountNameLevel a >=) mdepth negatePostingAmount :: Posting -> Posting negatePostingAmount p = p { pamount = negate $ pamount p } @@ -277,16 +276,16 @@ (Any, samplejournal) `gives` 13 -- register --depth just clips account names (Depth 2, samplejournal) `gives` 13 - (And [Depth 1, StatusQ Cleared, Acct "expenses"], samplejournal) `gives` 2 - (And [And [Depth 1, StatusQ Cleared], Acct "expenses"], samplejournal) `gives` 2 + (And [Depth 1, StatusQ Cleared, Acct (toRegex' "expenses")], samplejournal) `gives` 2 + (And [And [Depth 1, StatusQ Cleared], Acct (toRegex' "expenses")], samplejournal) `gives` 2 -- with query and/or command-line options (length $ snd $ postingsReport defreportopts Any samplejournal) @?= 13 (length $ snd $ postingsReport defreportopts{interval_=Months 1} Any samplejournal) @?= 11 (length $ snd $ postingsReport defreportopts{interval_=Months 1, empty_=True} Any samplejournal) @?= 20 - (length $ snd $ postingsReport defreportopts (Acct "assets:bank:checking") samplejournal) @?= 5 + (length $ snd $ postingsReport defreportopts (Acct (toRegex' "assets:bank:checking")) samplejournal) @?= 5 -- (defreportopts, And [Acct "a a", Acct "'b"], samplejournal2) `gives` 0 - -- [(Just (parsedate "2008-01-01","income"),assets:bank:checking $1,$1) + -- [(Just (fromGregorian 2008 01 01,"income"),assets:bank:checking $1,$1) -- ,(Nothing,income:salary $-1,0) -- ,(Just (2008-06-01,"gift"),assets:bank:checking $1,$1) -- ,(Nothing,income:gifts $-1,0) @@ -374,7 +373,7 @@ j <- samplejournal let gives displayexpr = (registerdates 
(postingsReportAsText opts $ postingsReport opts (queryFromOpts date1 opts) j) `is`) - where opts = defreportopts{display_=Just displayexpr} + where opts = defreportopts "d<[2008/6/2]" `gives` ["2008/01/01","2008/06/01"] "d<=[2008/6/2]" `gives` ["2008/01/01","2008/06/01","2008/06/02"] "d=[2008/6/2]" `gives` ["2008/06/02"] @@ -432,12 +431,12 @@ -} ,test "summarisePostingsByInterval" $ - summarisePostingsByInterval (Quarters 1) PrimaryDate 99999 False (DateSpan Nothing Nothing) [] @?= [] + summarisePostingsByInterval (Quarters 1) PrimaryDate Nothing False (DateSpan Nothing Nothing) [] @?= [] -- ,tests_summarisePostingsInDateSpan = [ -- "summarisePostingsInDateSpan" ~: do -- let gives (b,e,depth,showempty,ps) = - -- (summarisePostingsInDateSpan (mkdatespan b e) depth showempty ps `is`) + -- (summarisePostingsInDateSpan (DateSpan b e) depth showempty ps `is`) -- let ps = -- [ -- nullposting{lpdescription="desc",lpaccount="expenses:food:groceries",lpamount=Mixed [usd 1]} @@ -449,25 +448,25 @@ -- [] -- ("2008/01/01","2009/01/01",0,9999,True,[]) `gives` -- [ - -- nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31"} + -- nullposting{lpdate=fromGregorian 2008 01 01,lpdescription="- 2008/12/31"} -- ] -- ("2008/01/01","2009/01/01",0,9999,False,ts) `gives` -- [ - -- nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31",lpaccount="expenses:food", lpamount=Mixed [usd 4]} - -- ,nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31",lpaccount="expenses:food:dining", lpamount=Mixed [usd 10]} - -- ,nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31",lpaccount="expenses:food:groceries",lpamount=Mixed [usd 1]} + -- nullposting{lpdate=fromGregorian 2008 01 01,lpdescription="- 2008/12/31",lpaccount="expenses:food", lpamount=Mixed [usd 4]} + -- ,nullposting{lpdate=fromGregorian 2008 01 01,lpdescription="- 2008/12/31",lpaccount="expenses:food:dining", lpamount=Mixed [usd 10]} + -- ,nullposting{lpdate=fromGregorian 2008 
01 01,lpdescription="- 2008/12/31",lpaccount="expenses:food:groceries",lpamount=Mixed [usd 1]} -- ] -- ("2008/01/01","2009/01/01",0,2,False,ts) `gives` -- [ - -- nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31",lpaccount="expenses:food",lpamount=Mixed [usd 15]} + -- nullposting{lpdate=fromGregorian 2008 01 01,lpdescription="- 2008/12/31",lpaccount="expenses:food",lpamount=Mixed [usd 15]} -- ] -- ("2008/01/01","2009/01/01",0,1,False,ts) `gives` -- [ - -- nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31",lpaccount="expenses",lpamount=Mixed [usd 15]} + -- nullposting{lpdate=fromGregorian 2008 01 01,lpdescription="- 2008/12/31",lpaccount="expenses",lpamount=Mixed [usd 15]} -- ] -- ("2008/01/01","2009/01/01",0,0,False,ts) `gives` -- [ - -- nullposting{lpdate=parsedate "2008/01/01",lpdescription="- 2008/12/31",lpaccount="",lpamount=Mixed [usd 15]} + -- nullposting{lpdate=fromGregorian 2008 01 01,lpdescription="- 2008/12/31",lpaccount="",lpamount=Mixed [usd 15]} -- ] ] diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/ReportOptions.hs haskell-hledger-lib-1.19.1/Hledger/Reports/ReportOptions.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/ReportOptions.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/ReportOptions.hs 2020-09-01 17:33:33.000000000 +0000 @@ -4,7 +4,9 @@ -} -{-# LANGUAGE OverloadedStrings, RecordWildCards, LambdaCase, DeriveDataTypeable #-} +{-# LANGUAGE LambdaCase #-} +{-# LANGUAGE OverloadedStrings #-} +{-# LANGUAGE RecordWildCards #-} module Hledger.Reports.ReportOptions ( ReportOpts(..), @@ -46,15 +48,15 @@ where import Control.Applicative ((<|>)) -import Data.Data (Data) import Data.List.Extra (nubSort) -import Data.Maybe +import Data.Maybe (fromMaybe, isJust) import qualified Data.Text as T -import Data.Typeable (Typeable) -import Data.Time.Calendar -import Data.Default -import Safe -import System.Console.ANSI (hSupportsANSI) +import Data.Time.Calendar (Day, addDays, 
fromGregorian) +import Data.Default (Default(..)) +import Safe (lastDef, lastMay) + +import System.Console.ANSI (hSupportsANSIColor) +import System.Environment (lookupEnv) import System.IO (stdout) import Text.Megaparsec.Custom @@ -71,20 +73,21 @@ | HistoricalBalance -- ^ The historical ending balance, including the effect of -- all postings before the report period. Unless altered by, -- a query, this is what you would see on a bank statement. - deriving (Eq,Show,Data,Typeable) + deriving (Eq,Show) instance Default BalanceType where def = PeriodChange -- | Should accounts be displayed: in the command's default style, hierarchically, or as a flat list ? -data AccountListMode = ALDefault | ALTree | ALFlat deriving (Eq, Show, Data, Typeable) +data AccountListMode = ALFlat | ALTree deriving (Eq, Show) -instance Default AccountListMode where def = ALDefault +instance Default AccountListMode where def = ALFlat -- | Standard options for customising report filtering and output. -- Most of these correspond to standard hledger command-line options -- or query arguments, but not all. Some are used only by certain -- commands, as noted below. data ReportOpts = ReportOpts { + -- for most reports: today_ :: Maybe Day -- ^ The current date. A late addition to ReportOpts. -- Optional, but when set it may affect some reports: -- Reports use it when picking a -V valuation date. @@ -95,7 +98,6 @@ ,value_ :: Maybe ValuationType -- ^ What value should amounts be converted to ? ,infer_value_ :: Bool -- ^ Infer market prices from transactions ? ,depth_ :: Maybe Int - ,display_ :: Maybe DisplayExp -- XXX unused ? 
,date2_ :: Bool ,empty_ :: Bool ,no_elide_ :: Bool @@ -105,9 +107,11 @@ -- and quoted if needed (see 'quoteIfNeeded') -- ,average_ :: Bool - -- register command only + -- for posting reports (register) ,related_ :: Bool - -- balance-type commands only + -- for account transactions reports (aregister) + ,txn_dates_ :: Bool + -- for balance reports (bal, bs, cf, is) ,balancetype_ :: BalanceType ,accountlistmode_ :: AccountListMode ,drop_ :: Int @@ -126,9 +130,13 @@ -- sign normalisation, converting normally negative subreports to -- normally positive for a more conventional display. ,color_ :: Bool + -- ^ Whether to use ANSI color codes in text output. + -- Influenced by the --color/colour flag (cf CliOptions), + -- whether stdout is an interactive terminal, and the value of + -- TERM and existence of NO_COLOR environment variables. ,forecast_ :: Maybe DateSpan ,transpose_ :: Bool - } deriving (Show, Data, Typeable) + } deriving (Show) instance Default ReportOpts where def = defreportopts @@ -168,7 +176,9 @@ rawOptsToReportOpts rawopts = checkReportOpts <$> do let rawopts' = checkRawOpts rawopts d <- getCurrentDay - color <- hSupportsANSI stdout + no_color <- isJust <$> lookupEnv "NO_COLOR" + supports_color <- hSupportsANSIColor stdout + let colorflag = stringopt "color" rawopts return defreportopts{ today_ = Just d ,period_ = periodFromRawOpts d rawopts' @@ -176,8 +186,7 @@ ,statuses_ = statusesFromRawOpts rawopts' ,value_ = valuationTypeFromRawOpts rawopts' ,infer_value_ = boolopt "infer-value" rawopts' - ,depth_ = maybeintopt "depth" rawopts' - ,display_ = maybedisplayopt d rawopts' + ,depth_ = maybeposintopt "depth" rawopts' ,date2_ = boolopt "date2" rawopts' ,empty_ = boolopt "empty" rawopts' ,no_elide_ = boolopt "no-elide" rawopts' @@ -186,16 +195,20 @@ ,query_ = unwords . 
map quoteIfNeeded $ listofstringopt "args" rawopts' -- doesn't handle an arg like "" right ,average_ = boolopt "average" rawopts' ,related_ = boolopt "related" rawopts' + ,txn_dates_ = boolopt "txn-dates" rawopts' ,balancetype_ = balancetypeopt rawopts' ,accountlistmode_ = accountlistmodeopt rawopts' - ,drop_ = intopt "drop" rawopts' + ,drop_ = posintopt "drop" rawopts' ,row_total_ = boolopt "row-total" rawopts' ,no_total_ = boolopt "no-total" rawopts' ,sort_amount_ = boolopt "sort-amount" rawopts' ,percent_ = boolopt "percent" rawopts' ,invert_ = boolopt "invert" rawopts' ,pretty_tables_ = boolopt "pretty-tables" rawopts' - ,color_ = color + ,color_ = and [not no_color + ,not $ colorflag `elem` ["never","no"] + ,colorflag `elem` ["always","yes"] || supports_color + ] ,forecast_ = forecastPeriodFromRawOpts d rawopts' ,transpose_ = boolopt "transpose" rawopts' } @@ -225,7 +238,7 @@ accountlistmodeopt :: RawOpts -> AccountListMode accountlistmodeopt = - fromMaybe ALDefault . choiceopt parse where + fromMaybe ALFlat . choiceopt parse where parse = \case "tree" -> Just ALTree "flat" -> Just ALFlat @@ -308,7 +321,7 @@ (\e -> usageError $ "could not parse period option: "++customErrorBundlePretty e) extractIntervalOrNothing $ parsePeriodExpr - (error' "intervalFromRawOpts: did not expect to need today's date here") -- should not happen; we are just getting the interval, which does not use the reference date + (error' "intervalFromRawOpts: did not expect to need today's date here") -- PARTIAL: should not happen; we are just getting the interval, which does not use the reference date (stripquotes $ T.pack v) | n == "daily" = Just $ Days 1 | n == "weekly" = Just $ Weeks 1 @@ -327,7 +340,7 @@ Just str -> either (\e -> usageError $ "could not parse forecast period : "++customErrorBundlePretty e) (Just . snd) $ parsePeriodExpr d $ stripquotes $ T.pack str - + -- | Extract the interval from the parsed -p/--period expression. 
-- Return Nothing if an interval is not explicitly defined. extractIntervalOrNothing :: (Interval, DateSpan) -> Maybe Interval @@ -400,15 +413,6 @@ Just (AtDefault _) -> True _ -> False -type DisplayExp = String - -maybedisplayopt :: Day -> RawOpts -> Maybe DisplayExp -maybedisplayopt d rawopts = - maybe Nothing (Just . regexReplaceBy "\\[.+?\\]" fixbracketeddatestr) $ maybestringopt "display" rawopts - where - fixbracketeddatestr "" = "" - fixbracketeddatestr s = "[" ++ fixSmartDateStr d (T.pack $ init $ tail s) ++ "]" - -- | Select the Transaction date accessor based on --date2. transactionDateFn :: ReportOpts -> (Transaction -> Day) transactionDateFn ReportOpts{..} = if date2_ then transactionDate2 else tdate @@ -423,10 +427,11 @@ -- | Legacy-compatible convenience aliases for accountlistmode_. tree_ :: ReportOpts -> Bool -tree_ = (==ALTree) . accountlistmode_ +tree_ ReportOpts{accountlistmode_ = ALTree} = True +tree_ ReportOpts{accountlistmode_ = ALFlat} = False flat_ :: ReportOpts -> Bool -flat_ = (==ALFlat) . accountlistmode_ +flat_ = not . tree_ -- depthFromOpts :: ReportOpts -> Int -- depthFromOpts opts = min (fromMaybe 99999 $ depth_ opts) (queryDepth $ queryFromOpts nulldate opts) @@ -441,34 +446,30 @@ _ -> id -- | Convert report options and arguments to a query. +-- If there is a parsing problem, this function calls error. queryFromOpts :: Day -> ReportOpts -> Query -queryFromOpts d ReportOpts{..} = simplifyQuery $ And $ [flagsq, argsq] +queryFromOpts d ropts = simplifyQuery . And $ [flagsq, argsq] where - flagsq = And $ - [(if date2_ then Date2 else Date) $ periodAsDateSpan period_] - ++ (if real_ then [Real True] else []) - ++ (if empty_ then [Empty True] else []) -- ? - ++ [Or $ map StatusQ statuses_] - ++ (maybe [] ((:[]) . 
Depth) depth_) - argsq = fst $ parseQuery d (T.pack query_) + flagsq = queryFromOptsOnly d ropts + argsq = fst $ either error' id $ parseQuery d (T.pack $ query_ ropts) -- PARTIAL: -- | Convert report options to a query, ignoring any non-flag command line arguments. queryFromOptsOnly :: Day -> ReportOpts -> Query -queryFromOptsOnly _d ReportOpts{..} = simplifyQuery flagsq +queryFromOptsOnly _d ReportOpts{..} = simplifyQuery $ And flagsq where - flagsq = And $ - [(if date2_ then Date2 else Date) $ periodAsDateSpan period_] - ++ (if real_ then [Real True] else []) - ++ (if empty_ then [Empty True] else []) -- ? - ++ [Or $ map StatusQ statuses_] - ++ (maybe [] ((:[]) . Depth) depth_) + flagsq = consIf Real real_ + . consIf Empty empty_ + . consJust Depth depth_ + $ [ (if date2_ then Date2 else Date) $ periodAsDateSpan period_ + , Or $ map StatusQ statuses_ + ] + consIf f b = if b then (f True:) else id + consJust f = maybe id ((:) . f) -- | Convert report options and arguments to query options. +-- If there is a parsing problem, this function calls error. queryOptsFromOpts :: Day -> ReportOpts -> [QueryOpt] -queryOptsFromOpts d ReportOpts{..} = flagsqopts ++ argsqopts - where - flagsqopts = [] - argsqopts = snd $ parseQuery d (T.pack query_) +queryOptsFromOpts d = snd . either error' id . parseQuery d . T.pack . query_ -- PARTIAL: -- Report dates. 
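The new `queryFromOptsOnly` above replaces repeated `(if b then [...] else [])` concatenations with small `consIf`/`consJust` helpers that conditionally prepend a term. A minimal standalone sketch of that idiom, using a hypothetical toy `Query` type rather than hledger's real one:

```haskell
-- Sketch of the consIf/consJust conditional-cons idiom.
-- The Query constructors here are illustrative stand-ins only.
data Query = Real Bool | Empty Bool | Depth Int | DateAfter Int
  deriving (Eq, Show)

buildQuery :: Bool -> Bool -> Maybe Int -> [Query]
buildQuery real empty mdepth =
    consIf Real real
  . consIf Empty empty
  . consJust Depth mdepth
  $ [DateAfter 2020]        -- base terms that are always present
  where
    -- prepend (f True) only when the flag is set
    consIf f b = if b then (f True :) else id
    -- prepend (f x) only when a value is present
    consJust f = maybe id ((:) . f)

main :: IO ()
main = print $ buildQuery True False (Just 2)
-- prints [Real True,Depth 2,DateAfter 2020]
```

Each helper is a list-to-list function, so the flag-dependent terms compose with `.` and are applied to the base list in one pipeline, avoiding the `++`-of-singleton-lists shape of the old code.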
@@ -557,17 +558,16 @@ tests_ReportOptions = tests "ReportOptions" [ test "queryFromOpts" $ do queryFromOpts nulldate defreportopts @?= Any - queryFromOpts nulldate defreportopts{query_="a"} @?= Acct "a" - queryFromOpts nulldate defreportopts{query_="desc:'a a'"} @?= Desc "a a" - queryFromOpts nulldate defreportopts{period_=PeriodFrom (parsedate "2012/01/01"),query_="date:'to 2013'" } - @?= (Date $ mkdatespan "2012/01/01" "2013/01/01") - queryFromOpts nulldate defreportopts{query_="date2:'in 2012'"} @?= (Date2 $ mkdatespan "2012/01/01" "2013/01/01") - queryFromOpts nulldate defreportopts{query_="'a a' 'b"} @?= Or [Acct "a a", Acct "'b"] + queryFromOpts nulldate defreportopts{query_="a"} @?= Acct (toRegexCI' "a") + queryFromOpts nulldate defreportopts{query_="desc:'a a'"} @?= Desc (toRegexCI' "a a") + queryFromOpts nulldate defreportopts{period_=PeriodFrom (fromGregorian 2012 01 01),query_="date:'to 2013'" } + @?= (Date $ DateSpan (Just $ fromGregorian 2012 01 01) (Just $ fromGregorian 2013 01 01)) + queryFromOpts nulldate defreportopts{query_="date2:'in 2012'"} @?= (Date2 $ DateSpan (Just $ fromGregorian 2012 01 01) (Just $ fromGregorian 2013 01 01)) + queryFromOpts nulldate defreportopts{query_="'a a' 'b"} @?= Or [Acct $ toRegexCI' "a a", Acct $ toRegexCI' "'b"] ,test "queryOptsFromOpts" $ do queryOptsFromOpts nulldate defreportopts @?= [] queryOptsFromOpts nulldate defreportopts{query_="a"} @?= [] - queryOptsFromOpts nulldate defreportopts{period_=PeriodFrom (parsedate "2012/01/01") + queryOptsFromOpts nulldate defreportopts{period_=PeriodFrom (fromGregorian 2012 01 01) ,query_="date:'to 2013'"} @?= [] ] - diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/ReportTypes.hs haskell-hledger-lib-1.19.1/Hledger/Reports/ReportTypes.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/ReportTypes.hs 2020-03-14 16:07:07.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/ReportTypes.hs 2020-08-29 21:29:10.000000000 +0000 @@ -1,8 +1,10 @@ {- | New common report 
types, used by the BudgetReport for now, perhaps all reports later. -} +{-# LANGUAGE CPP #-} {-# LANGUAGE DeriveAnyClass #-} -{-# LANGUAGE DeriveGeneric #-} +{-# LANGUAGE DeriveFunctor #-} +{-# LANGUAGE DeriveGeneric #-} module Hledger.Reports.ReportTypes ( PeriodicReport(..) @@ -15,14 +17,31 @@ , Average , periodicReportSpan -, prNegate , prNormaliseSign +, prMapName +, prMapMaybeName + +, CompoundPeriodicReport(..) +, CBCSubreportSpec(..) + +, DisplayName(..) +, flatDisplayName +, treeDisplayName + +, prrFullName +, prrDisplayName +, prrDepth ) where import Data.Aeson import Data.Decimal +import Data.Maybe (mapMaybe) +#if !(MIN_VERSION_base(4,11,0)) +import Data.Semigroup (Semigroup(..)) +#endif import GHC.Generics (Generic) import Hledger.Data +import Hledger.Query (Query) type Percentage = Decimal @@ -67,16 +86,23 @@ -- significant. Usually displayed as report columns. , prRows :: [PeriodicReportRow a b] -- One row per account in the report. , prTotals :: PeriodicReportRow () b -- The grand totals row. - } deriving (Show, Generic, ToJSON) + } deriving (Show, Functor, Generic, ToJSON) data PeriodicReportRow a b = PeriodicReportRow { prrName :: a -- An account name. - , prrDepth :: Int -- Indent level for displaying this account name in tree mode. 0, 1, 2... , prrAmounts :: [b] -- The data value for each subperiod. , prrTotal :: b -- The total of this row's values. , prrAverage :: b -- The average of this row's values. 
- } deriving (Show, Generic, ToJSON) + } deriving (Show, Functor, Generic, ToJSON) + +instance Num b => Semigroup (PeriodicReportRow a b) where + (PeriodicReportRow _ amts1 t1 a1) <> (PeriodicReportRow n2 amts2 t2 a2) = + PeriodicReportRow n2 (sumPadded amts1 amts2) (t1 + t2) (a1 + a2) + where + sumPadded (a:as) (b:bs) = (a + b) : sumPadded as bs + sumPadded as [] = as + sumPadded [] bs = bs -- | Figure out the overall date span of a PeriodicReport periodicReportSpan :: PeriodicReport a b -> DateSpan @@ -86,13 +112,89 @@ -- | Given a PeriodicReport and its normal balance sign, -- if it is known to be normally negative, convert it to normally positive. prNormaliseSign :: Num b => NormalSign -> PeriodicReport a b -> PeriodicReport a b -prNormaliseSign NormallyNegative = prNegate -prNormaliseSign _ = id +prNormaliseSign NormallyNegative = fmap negate +prNormaliseSign NormallyPositive = id + +-- | Map a function over the row names. +prMapName :: (a -> b) -> PeriodicReport a c -> PeriodicReport b c +prMapName f report = report{prRows = map (prrMapName f) $ prRows report} + +-- | Map a function over the row names, possibly discarding some. +prMapMaybeName :: (a -> Maybe b) -> PeriodicReport a c -> PeriodicReport b c +prMapMaybeName f report = report{prRows = mapMaybe (prrMapMaybeName f) $ prRows report} + +-- | Map a function over the row names of the PeriodicReportRow. +prrMapName :: (a -> b) -> PeriodicReportRow a c -> PeriodicReportRow b c +prrMapName f row = row{prrName = f $ prrName row} + +-- | Map a function over the row names of the PeriodicReportRow, possibly discarding the row.
+prrMapMaybeName :: (a -> Maybe b) -> PeriodicReportRow a c -> Maybe (PeriodicReportRow b c) +prrMapMaybeName f row = case f $ prrName row of + Nothing -> Nothing + Just a -> Just row{prrName = a} + + +-- | A compound balance report has: +-- +-- * an overall title +-- +-- * the period (date span) of each column +-- +-- * one or more named, normal-positive multi balance reports, +-- with columns corresponding to the above, and a flag indicating +-- whether they increased or decreased the overall totals +-- +-- * a list of overall totals for each column, and their grand total and average +-- +-- It is used in compound balance report commands like balancesheet, +-- cashflow and incomestatement. +data CompoundPeriodicReport a b = CompoundPeriodicReport + { cbrTitle :: String + , cbrDates :: [DateSpan] + , cbrSubreports :: [(String, PeriodicReport a b, Bool)] + , cbrTotals :: PeriodicReportRow () b + } deriving (Show, Generic, ToJSON) --- | Flip the sign of all amounts in a PeriodicReport. -prNegate :: Num b => PeriodicReport a b -> PeriodicReport a b -prNegate (PeriodicReport colspans rows totalsrow) = - PeriodicReport colspans (map rowNegate rows) (rowNegate totalsrow) - where - rowNegate (PeriodicReportRow name indent amts tot avg) = - PeriodicReportRow name indent (map negate amts) (-tot) (-avg) +-- | Description of one subreport within a compound balance report. +-- Part of a "CompoundBalanceCommandSpec", but also used in hledger-lib. +data CBCSubreportSpec = CBCSubreportSpec + { cbcsubreporttitle :: String + , cbcsubreportquery :: Journal -> Query + , cbcsubreportnormalsign :: NormalSign + , cbcsubreportincreasestotal :: Bool + } + + +-- | A full name, display name, and depth for an account. +data DisplayName = DisplayName + { displayFull :: AccountName + , displayName :: AccountName + , displayDepth :: Int + } deriving (Show, Eq, Ord) + +instance ToJSON DisplayName where + toJSON = toJSON . displayFull + toEncoding = toEncoding . 
displayFull + +-- | Construct a flat display name, where the full name is also displayed at +-- depth 1 +flatDisplayName :: AccountName -> DisplayName +flatDisplayName a = DisplayName a a 1 + +-- | Construct a tree display name, where only the leaf is displayed at its +-- given depth +treeDisplayName :: AccountName -> DisplayName +treeDisplayName a = DisplayName a (accountLeafName a) (accountNameLevel a) + +-- | Get the full, canonical, name of a PeriodicReportRow tagged by a +-- DisplayName. +prrFullName :: PeriodicReportRow DisplayName a -> AccountName +prrFullName = displayFull . prrName + +-- | Get the display name of a PeriodicReportRow tagged by a DisplayName. +prrDisplayName :: PeriodicReportRow DisplayName a -> AccountName +prrDisplayName = displayName . prrName + +-- | Get the display depth of a PeriodicReportRow tagged by a DisplayName. +prrDepth :: PeriodicReportRow DisplayName a -> Int +prrDepth = displayDepth . prrName diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports/TransactionsReport.hs haskell-hledger-lib-1.19.1/Hledger/Reports/TransactionsReport.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports/TransactionsReport.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports/TransactionsReport.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,4 +1,4 @@ -{-# LANGUAGE OverloadedStrings, RecordWildCards, DeriveDataTypeable, FlexibleInstances #-} +{-# LANGUAGE OverloadedStrings, RecordWildCards, FlexibleInstances #-} {-| A transactions report. 
Like an EntriesReport, but with more diff -Nru haskell-hledger-lib-1.18.1/Hledger/Reports.hs haskell-hledger-lib-1.19.1/Hledger/Reports.hs --- haskell-hledger-lib-1.18.1/Hledger/Reports.hs 2020-01-28 17:23:35.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Reports.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,4 +1,4 @@ -{-# LANGUAGE OverloadedStrings, RecordWildCards, DeriveDataTypeable, FlexibleInstances #-} +{-# LANGUAGE OverloadedStrings, RecordWildCards, FlexibleInstances #-} {-| Generate several common kinds of report from a journal, as \"*Report\" - diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils/Debug.hs haskell-hledger-lib-1.19.1/Hledger/Utils/Debug.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils/Debug.hs 2020-06-21 01:40:43.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils/Debug.hs 2020-08-29 21:29:10.000000000 +0000 @@ -17,9 +17,9 @@ 3 report options selection 4 report generation 5 report generation, more detail -6 command line parsing -7 input file reading -8 input file reading, more detail +6 input file reading +7 input file reading, more detail +8 command line parsing 9 any other rarely needed / more in-depth info Tip: when debugging with GHCI, the first run after loading Debug.hs sets the diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils/Parse.hs haskell-hledger-lib-1.19.1/Hledger/Utils/Parse.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils/Parse.hs 2020-03-14 16:07:07.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils/Parse.hs 2020-08-29 21:29:10.000000000 +0000 @@ -20,10 +20,14 @@ showDateParseError, nonspace, isNonNewlineSpace, - spacenonewline, restofline, eolof, + spacenonewline, + skipNonNewlineSpaces, + skipNonNewlineSpaces1, + skipNonNewlineSpaces', + -- * re-exports CustomErr ) @@ -104,7 +108,7 @@ fromparse = either parseerror id parseerror :: (Show t, Show (Token t), Show e) => ParseErrorBundle t e -> a -parseerror e = error' $ showParseError e +parseerror e = error' $ showParseError e -- PARTIAL: showParseError 
:: (Show t, Show (Token t), Show e) @@ -125,9 +129,26 @@ spacenonewline :: (Stream s, Char ~ Token s) => ParsecT CustomErr s m Char spacenonewline = satisfy isNonNewlineSpace +{-# INLINABLE spacenonewline #-} restofline :: TextParser m String restofline = anySingle `manyTill` eolof +-- Skip many non-newline spaces. +skipNonNewlineSpaces :: (Stream s, Token s ~ Char) => ParsecT CustomErr s m () +skipNonNewlineSpaces = () <$ takeWhileP Nothing isNonNewlineSpace +{-# INLINABLE skipNonNewlineSpaces #-} + +-- Skip many non-newline spaces, failing if there are none. +skipNonNewlineSpaces1 :: (Stream s, Token s ~ Char) => ParsecT CustomErr s m () +skipNonNewlineSpaces1 = () <$ takeWhile1P Nothing isNonNewlineSpace +{-# INLINABLE skipNonNewlineSpaces1 #-} + +-- Skip many non-newline spaces, returning True if any have been skipped. +skipNonNewlineSpaces' :: (Stream s, Token s ~ Char) => ParsecT CustomErr s m Bool +skipNonNewlineSpaces' = True <$ skipNonNewlineSpaces1 <|> pure False +{-# INLINABLE skipNonNewlineSpaces' #-} + + eolof :: TextParser m () eolof = (newline >> return ()) <|> eof diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils/Regex.hs haskell-hledger-lib-1.19.1/Hledger/Utils/Regex.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils/Regex.hs 2020-06-04 21:01:48.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils/Regex.hs 2020-09-03 23:42:32.000000000 +0000 @@ -1,4 +1,7 @@ -{-# LANGUAGE ScopedTypeVariables #-} +{-# LANGUAGE FlexibleInstances #-} +{-# LANGUAGE MultiParamTypeClasses #-} +{-# LANGUAGE ScopedTypeVariables #-} +{-# LANGUAGE TypeSynonymInstances #-} {-| Easy regular expression helpers, currently based on regex-tdfa. These should: @@ -19,11 +22,19 @@ - work with simple strings -Regex strings are automatically compiled into regular expressions the -first time they are seen, and these are cached. If you use a huge -number of unique regular expressions this might lead to increased -memory usage. 
Several functions have memoised variants (*Memo), which -also trade space for time. +Regex strings are automatically compiled into regular expressions the first +time they are seen, and these are cached. If you use a huge number of unique +regular expressions this might lead to increased memory usage. Several +functions have memoised variants (*Memo), which also trade space for time. + +Currently two APIs are provided: + +- The old partial one (with ' suffixes) which will call error on any problem + (eg with malformed regexps). This comes from hledger's origin as a + command-line tool. + +- The new total one which will return an error message. This is better for + long-running apps like hledger-web. Current limitations: -} module Hledger.Utils.Regex ( + -- * Regexp type and constructors + Regexp(reString) + ,toRegex + ,toRegexCI + ,toRegex' + ,toRegexCI' -- * type aliases - Regexp ,Replacement - -- * standard regex operations - ,regexMatches - ,regexMatchesCI + ,RegexError + -- * total regex operations + ,regexMatch ,regexReplace - ,regexReplaceCI - ,regexReplaceMemo - ,regexReplaceCIMemo - ,regexReplaceBy - ,regexReplaceByCI + ,regexReplaceUnmemo + ,regexReplaceAllBy ) where -import Data.Array -import Data.Char +import Control.Monad (foldM) +import Data.Aeson (ToJSON(..), Value(String)) +import Data.Array ((!), elems, indices) +import Data.Char (isDigit) import Data.List (foldl') -import Data.Maybe (fromMaybe) import Data.MemoUgly (memo) +import qualified Data.Text as T import Text.Regex.TDFA ( - Regex, CompOption(..), ExecOption(..), defaultCompOpt, defaultExecOpt, - makeRegexOptsM, AllMatches(getAllMatches), match, (=~), MatchText + Regex, CompOption(..), defaultCompOpt, defaultExecOpt, + makeRegexOptsM, AllMatches(getAllMatches), match, MatchText, + RegexLike(..), RegexMaker(..), RegexOptions(..), RegexContext(..) ) import Hledger.Utils.UTF8IOCompat (error') -- | Regular expression. Extended regular expression-ish syntax ?
But does not support eg (?i) syntax. -type Regexp = String +data Regexp + = Regexp { reString :: String, reCompiled :: Regex } + | RegexpCI { reString :: String, reCompiled :: Regex } + +instance Eq Regexp where + Regexp s1 _ == Regexp s2 _ = s1 == s2 + RegexpCI s1 _ == RegexpCI s2 _ = s1 == s2 + _ == _ = False + +instance Ord Regexp where + Regexp s1 _ `compare` Regexp s2 _ = s1 `compare` s2 + RegexpCI s1 _ `compare` RegexpCI s2 _ = s1 `compare` s2 + Regexp _ _ `compare` RegexpCI _ _ = LT + RegexpCI _ _ `compare` Regexp _ _ = GT + +instance Show Regexp where + showsPrec d r = showParen (d > app_prec) $ reCons . showsPrec (app_prec+1) (reString r) + where app_prec = 10 + reCons = case r of Regexp _ _ -> showString "Regexp " + RegexpCI _ _ -> showString "RegexpCI " + +instance Read Regexp where + readsPrec d r = readParen (d > app_prec) (\r -> [(toRegexCI' m,t) | + ("RegexpCI",s) <- lex r, + (m,t) <- readsPrec (app_prec+1) s]) r + ++ readParen (d > app_prec) (\r -> [(toRegex' m, t) | + ("Regexp",s) <- lex r, + (m,t) <- readsPrec (app_prec+1) s]) r + where app_prec = 10 + +instance ToJSON Regexp where + toJSON (Regexp s _) = String . T.pack $ "Regexp " ++ s + toJSON (RegexpCI s _) = String . T.pack $ "RegexpCI " ++ s + +instance RegexLike Regexp String where + matchOnce = matchOnce . reCompiled + matchAll = matchAll . reCompiled + matchCount = matchCount . reCompiled + matchTest = matchTest . reCompiled + matchAllText = matchAllText . reCompiled + matchOnceText = matchOnceText . reCompiled + +instance RegexContext Regexp String String where + match = match . reCompiled + matchM = matchM . reCompiled + +-- Convert a Regexp string to a compiled Regex, or return an error message. +toRegex :: String -> Either RegexError Regexp +toRegex = memo $ \s -> mkRegexErr s (Regexp s <$> makeRegexM s) + +-- Like toRegex, but make a case-insensitive Regex.
+toRegexCI :: String -> Either RegexError Regexp +toRegexCI = memo $ \s -> mkRegexErr s (RegexpCI s <$> makeRegexOptsM defaultCompOpt{caseSensitive=False} defaultExecOpt s) + +-- | Make a nice error message for a regexp error. +mkRegexErr :: String -> Maybe a -> Either RegexError a +mkRegexErr s = maybe (Left errmsg) Right + where errmsg = "this regular expression could not be compiled: " ++ s + +-- Convert a Regexp string to a compiled Regex, throw an error +toRegex' :: String -> Regexp +toRegex' = either error' id . toRegex + +-- Like toRegex', but make a case-insensitive Regex. +toRegexCI' :: String -> Regexp +toRegexCI' = either error' id . toRegexCI -- | A replacement pattern. May include numeric backreferences (\N). type Replacement = String --- | Convert our string-based Regexp to a real Regex. --- Or if it's not well formed, call error with a "malformed regexp" message. -toRegex :: Regexp -> Regex -toRegex = memo (compileRegexOrError defaultCompOpt defaultExecOpt) - --- | Like toRegex but make a case-insensitive Regex. -toRegexCI :: Regexp -> Regex -toRegexCI = memo (compileRegexOrError defaultCompOpt{caseSensitive=False} defaultExecOpt) - -compileRegexOrError :: CompOption -> ExecOption -> Regexp -> Regex -compileRegexOrError compopt execopt r = - fromMaybe - (errorWithoutStackTrace $ "this regular expression could not be compiled: " ++ show r) $ - makeRegexOptsM compopt execopt r - --- regexMatch' :: RegexContext Regexp String a => Regexp -> String -> a --- regexMatch' r s = s =~ (toRegex r) - -regexMatches :: Regexp -> String -> Bool -regexMatches = flip (=~) - -regexMatchesCI :: Regexp -> String -> Bool -regexMatchesCI r = match (toRegexCI r) - --- | Replace all occurrences of the regexp, transforming each match with the given function. 
-regexReplaceBy :: Regexp -> (String -> String) -> String -> String -regexReplaceBy r = replaceAllBy (toRegex r) - -regexReplaceByCI :: Regexp -> (String -> String) -> String -> String -regexReplaceByCI r = replaceAllBy (toRegexCI r) - --- | Replace all occurrences of the regexp with the replacement --- pattern. The replacement pattern supports numeric backreferences --- (\N) but no other RE syntax. -regexReplace :: Regexp -> Replacement -> String -> String -regexReplace re = replaceRegex (toRegex re) +-- | A regular expression compilation/processing error message. +type RegexError = String -regexReplaceCI :: Regexp -> Replacement -> String -> String -regexReplaceCI re = replaceRegex (toRegexCI re) +-- helpers --- | A memoising version of regexReplace. Caches the result for each --- search pattern, replacement pattern, target string tuple. -regexReplaceMemo :: Regexp -> Replacement -> String -> String -regexReplaceMemo re repl = memo (regexReplace re repl) +-- | Test whether a Regexp matches a String. This is an alias for `matchTest` for consistent +-- naming. +regexMatch :: Regexp -> String -> Bool +regexMatch = matchTest -regexReplaceCIMemo :: Regexp -> Replacement -> String -> String -regexReplaceCIMemo re repl = memo (regexReplaceCI re repl) +-------------------------------------------------------------------------------- +-- new total functions --- +-- | A memoising version of regexReplace. Caches the result for each +-- search pattern, replacement pattern, target string tuple.
+regexReplace :: Regexp -> Replacement -> String -> Either RegexError String +regexReplace re repl = memo $ regexReplaceUnmemo re repl -replaceRegex :: Regex -> Replacement -> String -> String -replaceRegex re repl s = foldl (replaceMatch repl) s (reverse $ match re s :: [MatchText String]) +-- helpers: -replaceMatch :: Replacement -> String -> MatchText String -> String -replaceMatch replpat s matchgroups = pre ++ repl ++ post +-- Replace this regular expression with this replacement pattern in this +-- string, or return an error message. +regexReplaceUnmemo :: Regexp -> Replacement -> String -> Either RegexError String +regexReplaceUnmemo re repl s = foldM (replaceMatch repl) s (reverse $ match (reCompiled re) s :: [MatchText String]) where - ((_,(off,len)):_) = elems matchgroups -- groups should have 0-based indexes, and there should always be at least one, since this is a match - (pre, post') = splitAt off s - post = drop len post' - repl = replaceAllBy (toRegex "\\\\[0-9]+") (replaceBackReference matchgroups) replpat - -replaceBackReference :: MatchText String -> String -> String -replaceBackReference grps ('\\':s@(_:_)) | all isDigit s = - case read s of n | n `elem` indices grps -> fst (grps ! n) - _ -> error' $ "no match group exists for backreference \"\\"++s++"\"" -replaceBackReference _ s = error' $ "replaceBackReference called on non-numeric-backreference \""++s++"\", shouldn't happen" - --- - --- http://stackoverflow.com/questions/9071682/replacement-substition-with-haskell-regex-libraries : --- | Replace all occurrences of a regexp in a string, transforming each match with the given function. -replaceAllBy :: Regex -> (String -> String) -> String -> String -replaceAllBy re f s = start end + -- Replace one match within the string with the replacement text + -- appropriate for this match. Or return an error message. 
+ replaceMatch :: Replacement -> String -> MatchText String -> Either RegexError String + replaceMatch replpat s matchgroups = + erepl >>= \repl -> Right $ pre ++ repl ++ post + where + ((_,(off,len)):_) = elems matchgroups -- groups should have 0-based indexes, and there should always be at least one, since this is a match + (pre, post') = splitAt off s + post = drop len post' + -- The replacement text: the replacement pattern with all + -- numeric backreferences replaced by the appropriate groups + -- from this match. Or an error message. + erepl = regexReplaceAllByM backrefRegex (lookupMatchGroup matchgroups) replpat + where + -- Given some match groups and a numeric backreference, + -- return the referenced group text, or an error message. + lookupMatchGroup :: MatchText String -> String -> Either RegexError String + lookupMatchGroup grps ('\\':s@(_:_)) | all isDigit s = + case read s of n | n `elem` indices grps -> Right $ fst (grps ! n) + _ -> Left $ "no match group exists for backreference \"\\"++s++"\"" + lookupMatchGroup _ s = Left $ "lookupMatchGroup called on non-numeric-backreference \""++s++"\", shouldn't happen" + backrefRegex = toRegex' "\\\\[0-9]+" -- PARTIAL: should not fail + +-- regexReplace' :: Regexp -> Replacement -> String -> String +-- regexReplace' re repl s = +-- foldl (replaceMatch repl) s (reverse $ match (reCompiled re) s :: [MatchText String]) +-- where +-- replaceMatch :: Replacement -> String -> MatchText String -> String +-- replaceMatch replpat s matchgroups = pre ++ repl ++ post +-- where +-- ((_,(off,len)):_) = elems matchgroups -- groups should have 0-based indexes, and there should always be at least one, since this is a match +-- (pre, post') = splitAt off s +-- post = drop len post' +-- repl = regexReplaceAllBy backrefRegex (lookupMatchGroup matchgroups) replpat +-- where +-- lookupMatchGroup :: MatchText String -> String -> String +-- lookupMatchGroup grps ('\\':s@(_:_)) | all isDigit s = +-- case read s of n | n `elem` 
indices grps -> fst (grps ! n) +-- -- PARTIAL: +-- _ -> error' $ "no match group exists for backreference \"\\"++s++"\"" +-- lookupMatchGroup _ s = error' $ "lookupMatchGroup called on non-numeric-backreference \""++s++"\", shouldn't happen" +-- backrefRegex = toRegex' "\\\\[0-9]+" -- PARTIAL: should not fail + + +-- helpers + +-- adapted from http://stackoverflow.com/questions/9071682/replacement-substition-with-haskell-regex-libraries: + +-- Replace all occurrences of a regexp in a string, transforming each match +-- with the given pure function. +regexReplaceAllBy :: Regexp -> (String -> String) -> String -> String +regexReplaceAllBy re transform s = prependdone rest where - (_, end, start) = foldl' go (0, s, id) $ (getAllMatches $ match re s :: [(Int, Int)]) - go (ind,read,write) (off,len) = - let (skip, start) = splitAt (off - ind) read - (matched, remaining) = splitAt len start - in (off + len, remaining, write . (skip++) . (f matched ++)) - + (_, rest, prependdone) = foldl' go (0, s, id) matches + where + matches = getAllMatches $ match (reCompiled re) s :: [(Int, Int)] -- offset and length + go :: (Int,String,String->String) -> (Int,Int) -> (Int,String,String->String) + go (pos,todo,prepend) (off,len) = + let (prematch, matchandrest) = splitAt (off - pos) todo + (matched, rest) = splitAt len matchandrest + in (off + len, rest, prepend . (prematch++) . (transform matched ++)) + +-- Replace all occurrences of a regexp in a string, transforming each match +-- with the given monadic function. Eg if the monad is Either, a Left result +-- from the transform function short-circuits and is returned as the overall +-- result. +regexReplaceAllByM :: forall m. 
Monad m => Regexp -> (String -> m String) -> String -> m String +regexReplaceAllByM re transform s = + foldM go (0, s, id) matches >>= \(_, rest, prependdone) -> pure $ prependdone rest + where + matches = getAllMatches $ match (reCompiled re) s :: [(Int, Int)] -- offset and length + go :: (Int,String,String->String) -> (Int,Int) -> m (Int,String,String->String) + go (pos,todo,prepend) (off,len) = + let (prematch, matchandrest) = splitAt (off - pos) todo + (matched, rest) = splitAt len matchandrest + in transform matched >>= \matched' -> pure (off + len, rest, prepend . (prematch++) . (matched' ++)) diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils/String.hs haskell-hledger-lib-1.19.1/Hledger/Utils/String.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils/String.hs 2020-04-21 18:07:24.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils/String.hs 2020-09-07 19:20:58.000000000 +0000 @@ -1,6 +1,7 @@ -- | String formatting helpers, starting to get a bit out of control. module Hledger.Utils.String ( + takeEnd, -- * misc lowercase, uppercase, @@ -12,7 +13,6 @@ singleQuoteIfNeeded, -- quotechars, -- whitespacechars, - escapeQuotes, words', unwords', stripAnsi, @@ -48,14 +48,22 @@ ) where -import Data.Char -import Data.List -import Text.Megaparsec -import Text.Megaparsec.Char +import Data.Char (isDigit, isSpace, toLower, toUpper) +import Data.List (intercalate, transpose) +import Text.Megaparsec (Parsec, (<|>), (<?>), anySingle, between, many, noneOf, + oneOf, parseMaybe, sepBy, takeWhileP, try) +import Text.Megaparsec.Char (char, string) import Text.Printf (printf) import Hledger.Utils.Parse -import Hledger.Utils.Regex + + +-- | Take elements from the end of a list. +takeEnd n l = go (drop n l) l + where + go (_:xs) (_:ys) = go xs ys + go [] r = r + go _ [] = [] lowercase, uppercase :: String -> String lowercase = map toLower uppercase = map toUpper @@ -86,7 +94,7 @@ elideLeft :: Int -> String -> String elideLeft width s = - if length s > width then ".."
++ reverse (take (width - 2) $ reverse s) else s + if length s > width then ".." ++ takeEnd (width - 2) s else s elideRight :: Int -> String -> String elideRight width s = @@ -111,8 +119,9 @@ -- | Double-quote this string if it contains whitespace, single quotes -- or double-quotes, escaping the quotes as needed. quoteIfNeeded :: String -> String -quoteIfNeeded s | any (`elem` s) (quotechars++whitespacechars++redirectchars) = "\"" ++ escapeDoubleQuotes s ++ "\"" +quoteIfNeeded s | any (`elem` s) (quotechars++whitespacechars++redirectchars) = show s | otherwise = s + -- | Single-quote this string if it contains whitespace or double-quotes. -- No good for strings containing single quotes. singleQuoteIfNeeded :: String -> String @@ -124,19 +133,13 @@ whitespacechars = " \t\n\r" redirectchars = "<>" -escapeDoubleQuotes :: String -> String -escapeDoubleQuotes = regexReplace "\"" "\"" - -escapeQuotes :: String -> String -escapeQuotes = regexReplace "([\"'])" "\\1" - -- | Quote-aware version of words - don't split on spaces which are inside quotes. -- NB correctly handles "a'b" but not "''a''". Can raise an error if parsing fails. words' :: String -> [String] words' "" = [] words' s = map stripquotes $ fromparse $ parsewithString p s where - p = do ss <- (singleQuotedPattern <|> doubleQuotedPattern <|> pattern) `sepBy` skipSome spacenonewline + p = do ss <- (singleQuotedPattern <|> doubleQuotedPattern <|> pattern) `sepBy` skipNonNewlineSpaces1 -- eof return ss pattern = many (noneOf whitespacechars) @@ -332,12 +335,21 @@ -- (not counted), and line breaks (in a multi-line string, the longest -- line determines the width). strWidth :: String -> Int -strWidth "" = 0 -strWidth s = maximum $ map (foldr (\a b -> charWidth a + b) 0) $ lines s' - where s' = stripAnsi s +strWidth = maximum . (0:) . map (foldr (\a b -> charWidth a + b) 0) . lines . stripAnsi +-- | Strip ANSI escape sequences from a string. 
+-- +-- >>> stripAnsi "\ESC[31m-1\ESC[m" +-- "-1" stripAnsi :: String -> String -stripAnsi = regexReplace "\ESC\\[([0-9]+;)*([0-9]+)?[ABCDHJKfmsu]" "" +stripAnsi s = case parseMaybe (many $ "" <$ try ansi <|> pure <$> anySingle) s of + Nothing -> error "Bad ansi escape" -- PARTIAL: should not happen + Just xs -> concat xs + where + -- This parses lots of invalid ANSI escape codes, but that should be fine + ansi = string "\ESC[" *> digitSemicolons *> suffix <?> "ansi" :: Parsec CustomErr String Char + digitSemicolons = takeWhileP Nothing (\c -> isDigit c || c == ';') + suffix = oneOf ['A', 'B', 'C', 'D', 'H', 'J', 'K', 'f', 'm', 's', 'u'] -- | Get the designated render width of a character: 0 for a combining -- character, 1 for a regular character, 2 for a wide character. diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils/Text.hs haskell-hledger-lib-1.19.1/Hledger/Utils/Text.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils/Text.hs 2020-06-06 19:18:32.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils/Text.hs 2020-09-02 03:10:45.000000000 +0000 @@ -27,10 +27,6 @@ -- isSingleQuoted, -- isDoubleQuoted, -- -- * single-line layout - textstrip, - textlstrip, - textrstrip, - textchomp, -- elideLeft, textElideRight, -- formatString, @@ -54,12 +50,14 @@ -- fitStringMulti, textPadLeftWide, textPadRightWide, + -- -- * Reading + readDecimal, -- -- * tests tests_Text ) where --- import Data.Char +import Data.Char (digitToInt) import Data.List #if !(MIN_VERSION_base(4,11,0)) import Data.Monoid @@ -78,22 +76,6 @@ -- lowercase = map toLower -- uppercase = map toUpper --- | Remove leading and trailing whitespace. -textstrip :: Text -> Text -textstrip = textlstrip . textrstrip - --- | Remove leading whitespace. -textlstrip :: Text -> Text -textlstrip = T.dropWhile (`elem` (" \t" :: String)) :: Text -> Text -- XXX isSpace ? - --- | Remove trailing whitespace. -textrstrip = T.reverse . textlstrip .
T.reverse -textrstrip :: Text -> Text - --- | Remove trailing newlines/carriage returns (and other whitespace). -textchomp :: Text -> Text -textchomp = T.stripEnd - -- stripbrackets :: String -> String -- stripbrackets = dropWhile (`elem` "([") . reverse . dropWhile (`elem` "])") . reverse :: String -> String @@ -420,6 +402,13 @@ -- | otherwise -> 1 +-- | Read a decimal number from a Text. Assumes the input consists only of digit +-- characters. +readDecimal :: Text -> Integer +readDecimal = foldl' step 0 . T.unpack + where step a c = a * 10 + toInteger (digitToInt c) + + tests_Text = tests "Text" [ test "quoteIfSpaced" $ do quoteIfSpaced "a'a" @?= "a'a" diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils/Tree.hs haskell-hledger-lib-1.19.1/Hledger/Utils/Tree.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils/Tree.hs 2018-05-26 14:46:05.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils/Tree.hs 2020-09-01 17:33:33.000000000 +0000 @@ -1,77 +1,18 @@ -module Hledger.Utils.Tree where +module Hledger.Utils.Tree +( FastTree(..) +, treeFromPaths +) where -- import Data.Char import Data.List (foldl') import qualified Data.Map as M -import Data.Tree --- import Text.Megaparsec --- import Text.Printf - -import Hledger.Utils.Regex --- import Hledger.Utils.UTF8IOCompat (error') - --- standard tree helpers - -root = rootLabel -subs = subForest -branches = subForest - --- | List just the leaf nodes of a tree -leaves :: Tree a -> [a] -leaves (Node v []) = [v] -leaves (Node _ branches) = concatMap leaves branches - --- | get the sub-tree rooted at the first (left-most, depth-first) occurrence --- of the specified node value -subtreeat :: Eq a => a -> Tree a -> Maybe (Tree a) -subtreeat v t - | root t == v = Just t - | otherwise = subtreeinforest v $ subs t - --- | get the sub-tree for the specified node value in the first tree in --- forest in which it occurs. 
-subtreeinforest :: Eq a => a -> [Tree a] -> Maybe (Tree a) -subtreeinforest _ [] = Nothing -subtreeinforest v (t:ts) = case (subtreeat v t) of - Just t' -> Just t' - Nothing -> subtreeinforest v ts - --- | remove all nodes past a certain depth -treeprune :: Int -> Tree a -> Tree a -treeprune 0 t = Node (root t) [] -treeprune d t = Node (root t) (map (treeprune $ d-1) $ branches t) - --- | apply f to all tree nodes -treemap :: (a -> b) -> Tree a -> Tree b -treemap f t = Node (f $ root t) (map (treemap f) $ branches t) - --- | remove all subtrees whose nodes do not fulfill predicate -treefilter :: (a -> Bool) -> Tree a -> Tree a -treefilter f t = Node - (root t) - (map (treefilter f) $ filter (treeany f) $ branches t) - --- | is predicate true in any node of tree ? -treeany :: (a -> Bool) -> Tree a -> Bool -treeany f t = f (root t) || any (treeany f) (branches t) - --- treedrop -- remove the leaves which do fulfill predicate. --- treedropall -- do this repeatedly. - --- | show a compact ascii representation of a tree -showtree :: Show a => Tree a -> String -showtree = unlines . filter (regexMatches "[^ \\|]") . lines . drawTree . treemap show - --- | show a compact ascii representation of a forest -showforest :: Show a => Forest a -> String -showforest = concatMap showtree - -- | An efficient-to-build tree suggested by Cale Gibbard, probably -- better than accountNameTreeFrom. newtype FastTree a = T (M.Map a (FastTree a)) deriving (Show, Eq, Ord) +emptyTree :: FastTree a emptyTree = T M.empty mergeTrees :: (Ord a) => FastTree a -> FastTree a -> FastTree a @@ -83,5 +24,3 @@ treeFromPaths :: (Ord a) => [[a]] -> FastTree a treeFromPaths = foldl' mergeTrees emptyTree . 
map treeFromPath - - diff -Nru haskell-hledger-lib-1.18.1/Hledger/Utils.hs haskell-hledger-lib-1.19.1/Hledger/Utils.hs --- haskell-hledger-lib-1.18.1/Hledger/Utils.hs 2020-03-19 01:04:29.000000000 +0000 +++ haskell-hledger-lib-1.19.1/Hledger/Utils.hs 2020-08-29 21:29:10.000000000 +0000 @@ -162,7 +162,8 @@ expandPath :: FilePath -> FilePath -> IO FilePath -- general type sig for use in reader parsers expandPath _ "-" = return "-" expandPath curdir p = (if isRelative p then (curdir </>) else id) `liftM` expandHomePath p - +-- PARTIAL: + -- | Expand user home path indicated by tilde prefix expandHomePath :: FilePath -> IO FilePath expandHomePath = \case diff -Nru haskell-hledger-lib-1.18.1/hledger_csv.5 haskell-hledger-lib-1.19.1/hledger_csv.5 --- haskell-hledger-lib-1.18.1/hledger_csv.5 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_csv.5 2020-09-07 22:41:25.000000000 +0000 @@ -1,6 +1,6 @@ .\"t -.TH "hledger_csv" "5" "June 2020" "hledger 1.18.1" "hledger User Manuals" +.TH "hledger_csv" "5" "September 2020" "hledger 1.18.99" "hledger User Manuals" @@ -9,9 +9,10 @@ CSV - how hledger reads CSV data, and the CSV rules file format .SH DESCRIPTION .PP -hledger can read CSV (Comma Separated Value/Character Separated Value) -files as if they were journal files, automatically converting each CSV -record into a transaction. +hledger can read CSV files (Character Separated Value - usually comma, +semicolon, or tab) containing dated records as if they were journal +files, automatically converting each CSV record into a transaction. +.PP (To learn about \f[I]writing\f[R] CSV, see CSV output.) .PP We describe each CSV file\[aq]s format with a corresponding \f[I]rules @@ -35,7 +36,7 @@ .PP .TS tab(@); -l l. +lw(30.1n) lw(39.9n).
T{ \f[B]\f[CB]skip\f[B]\f[R] T}@T{ @@ -57,9 +58,14 @@ a custom field separator T} T{ -\f[B]\f[CB]if\f[B]\f[R] +\f[B]\f[CB]if\f[B] block\f[R] +T}@T{ +apply some rules to CSV records matched by patterns +T} +T{ +\f[B]\f[CB]if\f[B] table\f[R] T}@T{ -apply some rules to matched CSV records +apply some rules to CSV records matched by patterns, alternate syntax T} T{ \f[B]\f[CB]end\f[B]\f[R] @@ -587,9 +593,19 @@ See TIPS below for more about referencing other fields. .SS \f[C]separator\f[R] .PP -You can use the \f[C]separator\f[R] directive to read other kinds of +You can use the \f[C]separator\f[R] rule to read other kinds of character-separated data. -Eg to read SSV (Semicolon Separated Values), use: +The argument is any single separator character, or the words +\f[C]tab\f[R] or \f[C]space\f[R] (case insensitive). +Eg, for comma-separated values (CSV): +.IP +.nf +\f[C] +separator , +\f[R] +.fi +.PP +or for semicolon-separated values (SSV): .IP .nf \f[C] @@ -597,11 +613,7 @@ \f[R] .fi .PP -The separator directive accepts exactly one single byte character as a -separator. -To specify whitespace characters, you may use the special words -\f[C]TAB\f[R] or \f[C]SPACE\f[R]. -Eg to read TSV (Tab Separated Values), use: +or for tab-separated values (TSV): .IP .nf \f[C] @@ -609,8 +621,11 @@ \f[R] .fi .PP -See also: File Extension. -.SS \f[C]if\f[R] +If the input file has a \f[C].csv\f[R], \f[C].ssv\f[R] or \f[C].tsv\f[R] +file extension (or a \f[C]csv:\f[R], \f[C]ssv:\f[R], \f[C]tsv:\f[R] +prefix), the appropriate separator will be inferred automatically, and +you won\[aq]t need this rule. +.SS \f[C]if\f[R] block .IP .nf \f[C] @@ -630,6 +645,7 @@ applied only to CSV records which match certain patterns. They are often used for customising account names based on transaction descriptions. 
+.SS Matching the whole record .PP Each MATCHER can be a record matcher, which looks like this: .IP @@ -654,6 +670,7 @@ Eg, if the original record is \f[C]2020-01-01; \[dq]Acme, Inc.\[dq]; 1,000\f[R], the REGEX will actually see \f[C]2020-01-01,Acme, Inc., 1,000\f[R]). +.SS Matching individual fields .PP Or, MATCHER can be a field matcher, like this: .IP @@ -666,10 +683,23 @@ which matches just the content of a particular CSV field. CSVFIELD is a percent sign followed by the field\[aq]s name or column number, like \f[C]%date\f[R] or \f[C]%1\f[R]. +.SS Combining matchers .PP A single matcher can be written on the same line as the \[dq]if\[dq]; or multiple matchers can be written on the following lines, non-indented. -Multiple matchers are OR\[aq]d (any one of them can match). +Multiple matchers are OR\[aq]d (any one of them can match), unless one +begins with an \f[C]&\f[R] symbol, in which case it is AND\[aq]ed with +the previous matcher. +.IP +.nf +\f[C] +if +MATCHER +& MATCHER + RULE +\f[R] +.fi +.SS Rules applied on successful match .PP After the patterns there should be one or more rules to apply, all indented by at least one space. @@ -702,6 +732,76 @@ comment XXX deductible ? check it \f[R] .fi +.SS \f[C]if\f[R] table +.IP +.nf +\f[C] +if,CSVFIELDNAME1,CSVFIELDNAME2,...,CSVFIELDNAMEn +MATCHER1,VALUE11,VALUE12,...,VALUE1n +MATCHER2,VALUE21,VALUE22,...,VALUE2n +MATCHER3,VALUE31,VALUE32,...,VALUE3n + +\f[R] +.fi +.PP +Conditional tables (\[dq]if tables\[dq]) are a different syntax to +specify field assignments that will be applied only to CSV records which +match certain patterns. +.PP +MATCHER can be either a field matcher or a record matcher, as described +above. +When MATCHER matches, the values from that row are assigned to the CSV +fields named on the \f[C]if\f[R] line, in the same order.
+.PP +Therefore an \f[C]if\f[R] table is exactly equivalent to a sequence of +\f[C]if\f[R] blocks: +.IP +.nf +\f[C] +if MATCHER1 + CSVFIELDNAME1 VALUE11 + CSVFIELDNAME2 VALUE12 + ... + CSVFIELDNAMEn VALUE1n + +if MATCHER2 + CSVFIELDNAME1 VALUE21 + CSVFIELDNAME2 VALUE22 + ... + CSVFIELDNAMEn VALUE2n + +if MATCHER3 + CSVFIELDNAME1 VALUE31 + CSVFIELDNAME2 VALUE32 + ... + CSVFIELDNAMEn VALUE3n +\f[R] +.fi +.PP +Each line starting with MATCHER should contain enough (possibly empty) +values for all the listed fields. +.PP +Rules are checked and applied in the order they are listed in the +table and, like with \f[C]if\f[R] blocks, later rules (in the same or +another table) or \f[C]if\f[R] blocks can override the effect of any +rule. +.PP +Instead of \[aq],\[aq] you can use a variety of other non-alphanumeric +characters as a separator. +The first character after \f[C]if\f[R] is taken to be the separator for +the rest of the table. +It is the responsibility of the user to ensure that the separator does +not occur inside MATCHERs and values - there is no way to escape the +separator. +.PP +Example: +.IP +.nf +\f[C] +if,account2,comment +atm transaction fee,expenses:business:banking,deductible? check it +%description groceries,expenses:groceries, +2020/01/12.*Plumbing LLC,expenses:house:upkeep,emergency plumbing call-out +\f[R] +.fi .SS \f[C]end\f[R] .PP This rule can be used inside if blocks (only), to make hledger stop @@ -874,14 +974,12 @@ spaces outside the quotes are not allowed .SS File Extension .PP -CSV (\[dq]Character Separated Values\[dq]) files should be named with -one of these filename extensions: \f[C].csv\f[R], \f[C].ssv\f[R], -\f[C].tsv\f[R]. -Or, the file path should be prefixed with one of \f[C]csv:\f[R], -\f[C]ssv:\f[R], \f[C]tsv:\f[R]. -This helps hledger identify the format and show the right error -messages.
-For example: +To help hledger identify the format and show the right error messages, +CSV/SSV/TSV files should normally be named with a \f[C].csv\f[R], +\f[C].ssv\f[R] or \f[C].tsv\f[R] filename extension. +Or, the file path should be prefixed with \f[C]csv:\f[R], \f[C]ssv:\f[R] +or \f[C]tsv:\f[R]. +Eg: .IP .nf \f[C] @@ -897,7 +995,8 @@ \f[R] .fi .PP -More about this: Input files in the hledger manual. +You can override the file extension with a separator rule if needed. +See also: Input files in the hledger manual. .SS Reading multiple CSV files .PP If you use multiple \f[C]-f\f[R] options to read multiple CSV files at diff -Nru haskell-hledger-lib-1.18.1/hledger_csv.info haskell-hledger-lib-1.19.1/hledger_csv.info --- haskell-hledger-lib-1.18.1/hledger_csv.info 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_csv.info 2020-09-07 22:41:25.000000000 +0000 @@ -3,15 +3,17 @@  File: hledger_csv.info, Node: Top, Next: EXAMPLES, Up: (dir) -hledger_csv(5) hledger 1.18.1 -***************************** +hledger_csv(5) hledger 1.18.99 +****************************** CSV - how hledger reads CSV data, and the CSV rules file format - hledger can read CSV (Comma Separated Value/Character Separated -Value) files as if they were journal files, automatically converting -each CSV record into a transaction. (To learn about _writing_ CSV, see -CSV output.) + hledger can read CSV files (Character Separated Value - usually +comma, semicolon, or tab) containing dated records as if they were +journal files, automatically converting each CSV record into a +transaction. + + (To learn about _writing_ CSV, see CSV output.) We describe each CSV file's format with a corresponding _rules file_. 
By default this is named like the CSV file with a '.rules' extension @@ -28,17 +30,24 @@ an overview of the CSV rules; these are described more fully below, after the examples: -*'skip'* skip one or more header lines or matched CSV records -*'fields'* name CSV fields, assign them to hledger fields -*field assign a value to one hledger field, with interpolation -assignment* -*'separator'* a custom field separator -*'if'* apply some rules to matched CSV records -*'end'* skip the remaining CSV records -*'date-format'* describe the format of CSV dates -*'newest-first'* disambiguate record order when there's only one date -*'include'* inline another CSV rules file -*'balance-type'* choose which type of balance assignments to use +*'skip'* skip one or more header lines or + matched CSV records +*'fields'* name CSV fields, assign them to hledger + fields +*field assignment* assign a value to one hledger field, + with interpolation +*'separator'* a custom field separator +*'if' block* apply some rules to CSV records matched + by patterns +*'if' table* apply some rules to CSV records matched + by patterns, alternate syntax +*'end'* skip the remaining CSV records +*'date-format'* describe the format of CSV dates +*'newest-first'* disambiguate record order when there's + only one date +*'include'* inline another CSV rules file +*'balance-type'* choose which type of balance + assignments to use Note, for best error messages when reading CSV files, use a '.csv', '.tsv' or '.ssv' file extension or file prefix - see File Extension @@ -374,7 +383,8 @@ * fields:: * field assignment:: * separator:: -* if:: +* if block:: +* if table:: * end:: * date-format:: * newest-first:: @@ -567,30 +577,35 @@ referencing other fields.  
-File: hledger_csv.info, Node: separator, Next: if, Prev: field assignment, Up: CSV RULES +File: hledger_csv.info, Node: separator, Next: if block, Prev: field assignment, Up: CSV RULES 2.4 'separator' =============== -You can use the 'separator' directive to read other kinds of -character-separated data. Eg to read SSV (Semicolon Separated Values), -use: +You can use the 'separator' rule to read other kinds of +character-separated data. The argument is any single separator +character, or the words 'tab' or 'space' (case insensitive). Eg, for +comma-separated values (CSV): + +separator , + + or for semicolon-separated values (SSV): separator ; - The separator directive accepts exactly one single byte character as -a separator. To specify whitespace characters, you may use the special -words 'TAB' or 'SPACE'. Eg to read TSV (Tab Separated Values), use: + or for tab-separated values (TSV): separator TAB - See also: File Extension. + If the input file has a '.csv', '.ssv' or '.tsv' file extension (or a +'csv:', 'ssv:', 'tsv:' prefix), the appropriate separator will be +inferred automatically, and you won't need this rule.  -File: hledger_csv.info, Node: if, Next: end, Prev: separator, Up: CSV RULES +File: hledger_csv.info, Node: if block, Next: if table, Prev: separator, Up: CSV RULES -2.5 'if' -======== +2.5 'if' block +============== if MATCHER RULE @@ -607,7 +622,20 @@ often used for customising account names based on transaction descriptions. 
- Each MATCHER can be a record matcher, which looks like this: +* Menu: + +* Matching the whole record:: +* Matching individual fields:: +* Combining matchers:: +* Rules applied on successful match:: + + +File: hledger_csv.info, Node: Matching the whole record, Next: Matching individual fields, Up: if block + +2.5.1 Matching the whole record +------------------------------- + +Each MATCHER can be a record matcher, which looks like this: REGEX @@ -624,7 +652,13 @@ the original record is '2020-01-01; "Acme, Inc."; 1,000', the REGEX will actually see '2020-01-01,Acme, Inc., 1,000'). - Or, MATCHER can be a field matcher, like this: + +File: hledger_csv.info, Node: Matching individual fields, Next: Combining matchers, Prev: Matching the whole record, Up: if block + +2.5.2 Matching individual fields +-------------------------------- + +Or, MATCHER can be a field matcher, like this: %CSVFIELD REGEX @@ -632,11 +666,30 @@ is a percent sign followed by the field's name or column number, like '%date' or '%1'. - A single matcher can be written on the same line as the "if"; or + +File: hledger_csv.info, Node: Combining matchers, Next: Rules applied on successful match, Prev: Matching individual fields, Up: if block + +2.5.3 Combining matchers +------------------------ + +A single matcher can be written on the same line as the "if"; or multiple matchers can be written on the following lines, non-indented. -Multiple matchers are OR'd (any one of them can match). +Multiple matchers are OR'd (any one of them can match), unless one +begins with an '&' symbol, in which case it is AND'ed with the previous +matcher. 
+
+if
+MATCHER
+& MATCHER
+ RULE
+
+
+File: hledger_csv.info, Node: Rules applied on successful match, Prev: Combining matchers, Up: if block
-   After the patterns there should be one or more rules to apply, all
+2.5.4 Rules applied on successful match
+---------------------------------------
+
+After the patterns there should be one or more rules to apply, all
 indented by at least one space. Three kinds of rule are allowed in
 conditional blocks:
@@ -659,9 +712,70 @@
      comment XXX deductible ? check it
 
 
-File: hledger_csv.info, Node: end, Next: date-format, Prev: if, Up: CSV RULES
+File: hledger_csv.info, Node: if table, Next: end, Prev: if block, Up: CSV RULES
+
+2.6 'if' table
+==============
+
+if,CSVFIELDNAME1,CSVFIELDNAME2,...,CSVFIELDNAMEn
+MATCHER1,VALUE11,VALUE12,...,VALUE1n
+MATCHER2,VALUE21,VALUE22,...,VALUE2n
+MATCHER3,VALUE31,VALUE32,...,VALUE3n
+
+
+   Conditional tables ("if tables") are a different syntax to specify
+field assignments that will be applied only to CSV records which match
+certain patterns.
+
+   MATCHER can be either a field or a record matcher, as described
+above.  When MATCHER matches, values from that row will be assigned
+to the CSV fields named on the 'if' line, in the same order.
+
+   Therefore an 'if' table is exactly equivalent to a sequence of 'if'
+blocks:
+
+if MATCHER1
+ CSVFIELDNAME1 VALUE11
+ CSVFIELDNAME2 VALUE12
+ ...
+ CSVFIELDNAMEn VALUE1n
+
+if MATCHER2
+ CSVFIELDNAME1 VALUE21
+ CSVFIELDNAME2 VALUE22
+ ...
+ CSVFIELDNAMEn VALUE2n
+
+if MATCHER3
+ CSVFIELDNAME1 VALUE31
+ CSVFIELDNAME2 VALUE32
+ ...
+ CSVFIELDNAMEn VALUE3n
+
+   Each line starting with MATCHER should contain enough (possibly
+empty) values for all the listed fields.
+
+   Rules are checked and applied in the order they are listed in
+the table and, like with 'if' blocks, later rules (in the same or
+another table) or 'if' blocks can override the effect of any rule.
+ + Instead of ',' you can use a variety of other non-alphanumeric +characters as a separator. First character after 'if' is taken to be +the separator for the rest of the table. It is the responsibility of +the user to ensure that separator does not occur inside MATCHERs and +values - there is no way to escape separator. + + Example: + +if,account2,comment +atm transaction fee,expenses:business:banking,deductible? check it +%description groceries,expenses:groceries, +2020/01/12.*Plumbing LLC,expenses:house:upkeep,emergency plumbing call-out + + +File: hledger_csv.info, Node: end, Next: date-format, Prev: if table, Up: CSV RULES -2.6 'end' +2.7 'end' ========= This rule can be used inside if blocks (only), to make hledger stop @@ -675,7 +789,7 @@  File: hledger_csv.info, Node: date-format, Next: newest-first, Prev: end, Up: CSV RULES -2.7 'date-format' +2.8 'date-format' ================= date-format DATEFMT @@ -706,7 +820,7 @@  File: hledger_csv.info, Node: newest-first, Next: include, Prev: date-format, Up: CSV RULES -2.8 'newest-first' +2.9 'newest-first' ================== hledger always sorts the generated transactions by date. Transactions @@ -728,8 +842,8 @@  File: hledger_csv.info, Node: include, Next: balance-type, Prev: newest-first, Up: CSV RULES -2.9 'include' -============= +2.10 'include' +============== include RULESFILE @@ -751,7 +865,7 @@  File: hledger_csv.info, Node: balance-type, Prev: include, Up: CSV RULES -2.10 'balance-type' +2.11 'balance-type' =================== Balance assertions generated by assigning to balanceN are of the simple @@ -825,11 +939,10 @@ 3.3 File Extension ================== -CSV ("Character Separated Values") files should be named with one of -these filename extensions: '.csv', '.ssv', '.tsv'. Or, the file path -should be prefixed with one of 'csv:', 'ssv:', 'tsv:'. This helps -hledger identify the format and show the right error messages. 
For -example: +To help hledger identify the format and show the right error messages, +CSV/SSV/TSV files should normally be named with a '.csv', '.ssv' or +'.tsv' filename extension. Or, the file path should be prefixed with +'csv:', 'ssv:' or 'tsv:'. Eg: $ hledger -f foo.ssv print @@ -837,7 +950,8 @@ $ cat foo | hledger -f ssv:- foo - More about this: Input files in the hledger manual. + You can override the file extension with a separator rule if needed. +See also: Input files in the hledger manual.  File: hledger_csv.info, Node: Reading multiple CSV files, Next: Valid transactions, Prev: File Extension, Up: TIPS @@ -1036,74 +1150,84 @@  Tag Table: Node: Top72 -Node: EXAMPLES2178 -Ref: #examples2284 -Node: Basic2492 -Ref: #basic2592 -Node: Bank of Ireland3134 -Ref: #bank-of-ireland3269 -Node: Amazon4731 -Ref: #amazon4849 -Node: Paypal6568 -Ref: #paypal6662 -Node: CSV RULES14306 -Ref: #csv-rules14415 -Node: skip14691 -Ref: #skip14784 -Node: fields15159 -Ref: #fields15281 -Node: Transaction field names16446 -Ref: #transaction-field-names16606 -Node: Posting field names16717 -Ref: #posting-field-names16869 -Node: account16939 -Ref: #account17055 -Node: amount17592 -Ref: #amount17723 -Node: currency18830 -Ref: #currency18965 -Node: balance19171 -Ref: #balance19305 -Node: comment19622 -Ref: #comment19739 -Node: field assignment19902 -Ref: #field-assignment20045 -Node: separator20863 -Ref: #separator20992 -Node: if21403 -Ref: #if21505 -Node: end23661 -Ref: #end23767 -Node: date-format23991 -Ref: #date-format24123 -Node: newest-first24872 -Ref: #newest-first25010 -Node: include25693 -Ref: #include25822 -Node: balance-type26266 -Ref: #balance-type26386 -Node: TIPS27086 -Ref: #tips27168 -Node: Rapid feedback27424 -Ref: #rapid-feedback27541 -Node: Valid CSV28001 -Ref: #valid-csv28131 -Node: File Extension28323 -Ref: #file-extension28475 -Node: Reading multiple CSV files28885 -Ref: #reading-multiple-csv-files29070 -Node: Valid transactions29311 -Ref: 
#valid-transactions29489 -Node: Deduplicating importing30117 -Ref: #deduplicating-importing30296 -Node: Setting amounts31329 -Ref: #setting-amounts31498 -Node: Setting currency/commodity32485 -Ref: #setting-currencycommodity32677 -Node: Referencing other fields33480 -Ref: #referencing-other-fields33680 -Node: How CSV rules are evaluated34577 -Ref: #how-csv-rules-are-evaluated34750 +Node: EXAMPLES2677 +Ref: #examples2783 +Node: Basic2991 +Ref: #basic3091 +Node: Bank of Ireland3633 +Ref: #bank-of-ireland3768 +Node: Amazon5230 +Ref: #amazon5348 +Node: Paypal7067 +Ref: #paypal7161 +Node: CSV RULES14805 +Ref: #csv-rules14914 +Node: skip15209 +Ref: #skip15302 +Node: fields15677 +Ref: #fields15799 +Node: Transaction field names16964 +Ref: #transaction-field-names17124 +Node: Posting field names17235 +Ref: #posting-field-names17387 +Node: account17457 +Ref: #account17573 +Node: amount18110 +Ref: #amount18241 +Node: currency19348 +Ref: #currency19483 +Node: balance19689 +Ref: #balance19823 +Node: comment20140 +Ref: #comment20257 +Node: field assignment20420 +Ref: #field-assignment20563 +Node: separator21381 +Ref: #separator21516 +Node: if block22056 +Ref: #if-block22181 +Node: Matching the whole record22582 +Ref: #matching-the-whole-record22757 +Node: Matching individual fields23561 +Ref: #matching-individual-fields23765 +Node: Combining matchers23989 +Ref: #combining-matchers24185 +Node: Rules applied on successful match24498 +Ref: #rules-applied-on-successful-match24689 +Node: if table25343 +Ref: #if-table25462 +Node: end27200 +Ref: #end27312 +Node: date-format27536 +Ref: #date-format27668 +Node: newest-first28417 +Ref: #newest-first28555 +Node: include29238 +Ref: #include29369 +Node: balance-type29813 +Ref: #balance-type29933 +Node: TIPS30633 +Ref: #tips30715 +Node: Rapid feedback30971 +Ref: #rapid-feedback31088 +Node: Valid CSV31548 +Ref: #valid-csv31678 +Node: File Extension31870 +Ref: #file-extension32022 +Node: Reading multiple CSV files32451 +Ref: 
#reading-multiple-csv-files32636 +Node: Valid transactions32877 +Ref: #valid-transactions33055 +Node: Deduplicating importing33683 +Ref: #deduplicating-importing33862 +Node: Setting amounts34895 +Ref: #setting-amounts35064 +Node: Setting currency/commodity36051 +Ref: #setting-currencycommodity36243 +Node: Referencing other fields37046 +Ref: #referencing-other-fields37246 +Node: How CSV rules are evaluated38143 +Ref: #how-csv-rules-are-evaluated38316  End Tag Table diff -Nru haskell-hledger-lib-1.18.1/hledger_csv.txt haskell-hledger-lib-1.19.1/hledger_csv.txt --- haskell-hledger-lib-1.18.1/hledger_csv.txt 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_csv.txt 2020-09-07 22:41:25.000000000 +0000 @@ -7,60 +7,58 @@ CSV - how hledger reads CSV data, and the CSV rules file format DESCRIPTION - hledger can read CSV (Comma Separated Value/Character Separated Value) - files as if they were journal files, automatically converting each CSV - record into a transaction. (To learn about writing CSV, see CSV out- - put.) + hledger can read CSV files (Character Separated Value - usually comma, + semicolon, or tab) containing dated records as if they were journal + files, automatically converting each CSV record into a transaction. + + (To learn about writing CSV, see CSV output.) We describe each CSV file's format with a corresponding rules file. By - default this is named like the CSV file with a .rules extension added. - Eg when reading FILE.csv, hledger also looks for FILE.csv.rules in the - same directory as FILE.csv. You can specify a different rules file - with the --rules-file option. If a rules file is not found, hledger + default this is named like the CSV file with a .rules extension added. + Eg when reading FILE.csv, hledger also looks for FILE.csv.rules in the + same directory as FILE.csv. You can specify a different rules file + with the --rules-file option. 
If a rules file is not found, hledger will create a sample rules file, which you'll need to adjust. - This file contains rules describing the CSV data (header line, fields + This file contains rules describing the CSV data (header line, fields layout, date format etc.), and how to construct hledger journal entries (transactions) from it. Often there will also be a list of conditional rules for categorising transactions based on their descriptions. Here's an overview of the CSV rules; these are described more fully be- low, after the examples: - skip skip one or more header - lines or matched CSV - records - fields name CSV fields, assign - them to hledger fields - field assignment assign a value to one - hledger field, with inter- - polation - separator a custom field separator - if apply some rules to - matched CSV records - end skip the remaining CSV - records - date-format describe the format of CSV - dates - newest-first disambiguate record order - when there's only one date - include inline another CSV rules - file - balance-type choose which type of bal- - ance assignments to use + skip skip one or more header lines or matched + CSV records + fields name CSV fields, assign them to hledger + fields + field assignment assign a value to one hledger field, + with interpolation + separator a custom field separator + if block apply some rules to CSV records matched + by patterns + if table apply some rules to CSV records matched + by patterns, alternate syntax + end skip the remaining CSV records + date-format describe the format of CSV dates + newest-first disambiguate record order when there's + only one date + include inline another CSV rules file + balance-type choose which type of balance assignments + to use - Note, for best error messages when reading CSV files, use a .csv, .tsv + Note, for best error messages when reading CSV files, use a .csv, .tsv or .ssv file extension or file prefix - see File Extension below. 
There's an introductory Convert CSV files tutorial on hledger.org. EXAMPLES - Here are some sample hledger CSV rules files. See also the full col- + Here are some sample hledger CSV rules files. See also the full col- lection at: https://github.com/simonmichael/hledger/tree/master/examples/csv Basic - At minimum, the rules file must identify the date and amount fields, - and often it also specifies the date format and how many header lines + At minimum, the rules file must identify the date and amount fields, + and often it also specifies the date format and how many header lines there are. Here's a simple CSV file and a rules file for it: Date, Description, Id, Amount @@ -79,8 +77,8 @@ Default account names are chosen, since we didn't set them. Bank of Ireland - Here's a CSV with two amount fields (Debit and Credit), and a balance - field, which we can use to add balance assertions, which is not neces- + Here's a CSV with two amount fields (Debit and Credit), and a balance + field, which we can use to add balance assertions, which is not neces- sary but provides extra error checking: Date,Details,Debit,Credit,Balance @@ -122,13 +120,13 @@ assets:bank:boi:checking EUR-5.0 = EUR126.0 expenses:unknown EUR5.0 - The balance assertions don't raise an error above, because we're read- - ing directly from CSV, but they will be checked if these entries are + The balance assertions don't raise an error above, because we're read- + ing directly from CSV, but they will be checked if these entries are imported into a journal file. Amazon Here we convert amazon.com order history, and use an if block to gener- - ate a third posting if there's a fee. (In practice you'd probably get + ate a third posting if there's a fee. (In practice you'd probably get this data from your bank instead, but it's an example.) 
"Date","Type","To/From","Name","Status","Amount","Fees","Transaction ID" @@ -180,7 +178,7 @@ expenses:fees $1.00 Paypal - Here's a real-world rules file for (customised) Paypal CSV, with some + Here's a real-world rules file for (customised) Paypal CSV, with some Paypal-specific rules, and a second rules file included: "Date","Time","TimeZone","Name","Type","Status","Currency","Gross","Fee","Net","From Email Address","To Email Address","Transaction ID","Item Title","Item ID","Reference Txn ID","Receipt ID","Balance","Note" @@ -335,9 +333,9 @@ skip skip N - The word "skip" followed by a number (or no number, meaning 1) tells - hledger to ignore this many non-empty lines preceding the CSV data. - (Empty/blank lines are skipped automatically.) You'll need this when- + The word "skip" followed by a number (or no number, meaning 1) tells + hledger to ignore this many non-empty lines preceding the CSV data. + (Empty/blank lines are skipped automatically.) You'll need this when- ever your CSV data contains header lines. It also has a second purpose: it can be used inside if blocks to ignore @@ -346,27 +344,27 @@ fields fields FIELDNAME1, FIELDNAME2, ... - A fields list (the word "fields" followed by comma-separated field - names) is the quick way to assign CSV field values to hledger fields. + A fields list (the word "fields" followed by comma-separated field + names) is the quick way to assign CSV field values to hledger fields. It does two things: - 1. it names the CSV fields. This is optional, but can be convenient + 1. it names the CSV fields. This is optional, but can be convenient later for interpolating them. 2. when you use a standard hledger field name, it assigns the CSV value to that part of the hledger transaction. 
- Here's an example that says "use the 1st, 2nd and 4th fields as the - transaction's date, description and amount; name the last two fields + Here's an example that says "use the 1st, 2nd and 4th fields as the + transaction's date, description and amount; name the last two fields for later reference; and ignore the others": fields date, description, , amount, , , somefield, anotherfield - Field names may not contain whitespace. Fields you don't care about - can be left unnamed. Currently there must be least two items (there + Field names may not contain whitespace. Fields you don't care about + can be left unnamed. Currently there must be least two items (there must be at least one comma). - Note, always use comma in the fields list, even if your CSV uses an- + Note, always use comma in the fields list, even if your CSV uses an- other separator character. Here are the standard hledger field/pseudo-field names. For more about @@ -379,52 +377,52 @@ Posting field names account - accountN, where N is 1 to 99, causes a posting to be generated, with + accountN, where N is 1 to 99, causes a posting to be generated, with that account name. - Most often there are two postings, so you'll want to set account1 and - account2. Typically account1 is associated with the CSV file, and is - set once with a top-level assignment, while account2 is set based on + Most often there are two postings, so you'll want to set account1 and + account2. Typically account1 is associated with the CSV file, and is + set once with a top-level assignment, while account2 is set based on each transaction's description, and in conditional blocks. - If a posting's account name is left unset but its amount is set (see - below), a default account name will be chosen (like "expenses:unknown" + If a posting's account name is left unset but its amount is set (see + below), a default account name will be chosen (like "expenses:unknown" or "income:unknown"). amount - amountN sets posting N's amount. 
If the CSV uses separate fields for - inflows and outflows, you can use amountN-in and amountN-out instead. - By assigning to amount1, amount2, ... etc. you can generate anywhere + amountN sets posting N's amount. If the CSV uses separate fields for + inflows and outflows, you can use amountN-in and amountN-out instead. + By assigning to amount1, amount2, ... etc. you can generate anywhere from 0 to 99 postings. - There is also an older, unnumbered form of these names, suitable for + There is also an older, unnumbered form of these names, suitable for 2-posting transactions, which sets both posting 1's and (negated) post- - ing 2's amount: amount, or amount-in and amount-out. This is still - supported because it keeps pre-hledger-1.17 csv rules files working, - and because it can be more succinct, and because it converts posting + ing 2's amount: amount, or amount-in and amount-out. This is still + supported because it keeps pre-hledger-1.17 csv rules files working, + and because it can be more succinct, and because it converts posting 2's amount to cost if there's a transaction price, which can be useful. If you have an existing rules file using the unnumbered form, you might - want to use the numbered form in certain conditional blocks, without - having to update and retest all the old rules. To facilitate this, - posting 1 ignores amount/amount-in/amount-out if any of + want to use the numbered form in certain conditional blocks, without + having to update and retest all the old rules. To facilitate this, + posting 1 ignores amount/amount-in/amount-out if any of amount1/amount1-in/amount1-out are assigned, and posting 2 ignores them - if any of amount2/amount2-in/amount2-out are assigned, avoiding con- + if any of amount2/amount2-in/amount2-out are assigned, avoiding con- flicts. 
currency If the CSV has the currency symbol in a separate field (ie, not part of - the amount field), you can use currencyN to prepend it to posting N's + the amount field), you can use currencyN to prepend it to posting N's amount. Or, currency with no number affects all postings. balance - balanceN sets a balance assertion amount (or if the posting amount is + balanceN sets a balance assertion amount (or if the posting amount is left empty, a balance assignment) on posting N. - Also, for compatibility with hledger <1.17: balance with no number is + Also, for compatibility with hledger <1.17: balance with no number is equivalent to balance1. - You can adjust the type of assertion/assignment with the balance-type + You can adjust the type of assertion/assignment with the balance-type rule (see below). comment @@ -436,11 +434,11 @@ field assignment HLEDGERFIELDNAME FIELDVALUE - Instead of or in addition to a fields list, you can use a "field as- - signment" rule to set the value of a single hledger field, by writing - its name (any of the standard hledger field names above) followed by a - text value. The value may contain interpolated CSV fields, referenced - by their 1-based position in the CSV record (%N), or by the name they + Instead of or in addition to a fields list, you can use a "field as- + signment" rule to set the value of a single hledger field, by writing + its name (any of the standard hledger field names above) followed by a + text value. The value may contain interpolated CSV fields, referenced + by their 1-based position in the CSV record (%N), or by the name they were given in the fields list (%CSVFIELDNAME). 
Some examples: # set the amount to the 4th CSV field, with " USD" appended @@ -449,25 +447,31 @@ # combine three fields to make a comment, containing note: and date: tags comment note: %somefield - %anotherfield, date: %1 - Interpolation strips outer whitespace (so a CSV value like " 1 " be- + Interpolation strips outer whitespace (so a CSV value like " 1 " be- comes 1 when interpolated) (#1051). See TIPS below for more about ref- erencing other fields. separator - You can use the separator directive to read other kinds of character- - separated data. Eg to read SSV (Semicolon Separated Values), use: + You can use the separator rule to read other kinds of character-sepa- + rated data. The argument is any single separator character, or the + words tab or space (case insensitive). Eg, for comma-separated values + (CSV): + + separator , + + or for semicolon-separated values (SSV): separator ; - The separator directive accepts exactly one single byte character as a - separator. To specify whitespace characters, you may use the special - words TAB or SPACE. Eg to read TSV (Tab Separated Values), use: + or for tab-separated values (TSV): separator TAB - See also: File Extension. + If the input file has a .csv, .ssv or .tsv file extension (or a csv:, + ssv:, tsv: prefix), the appropriate separator will be inferred automat- + ically, and you won't need this rule. - if + if block if MATCHER RULE @@ -478,39 +482,49 @@ RULE RULE - Conditional blocks ("if blocks") are a block of rules that are applied - only to CSV records which match certain patterns. They are often used + Conditional blocks ("if blocks") are a block of rules that are applied + only to CSV records which match certain patterns. They are often used for customising account names based on transaction descriptions. + Matching the whole record Each MATCHER can be a record matcher, which looks like this: REGEX - REGEX is a case-insensitive regular expression which tries to match - anywhere within the CSV record. 
It is a POSIX ERE (extended regular - expression) that also supports GNU word boundaries (\b, \B, \<, \>), + REGEX is a case-insensitive regular expression which tries to match + anywhere within the CSV record. It is a POSIX ERE (extended regular + expression) that also supports GNU word boundaries (\b, \B, \<, \>), and nothing else. If you have trouble, be sure to check our https://hledger.org/hledger.html#regular-expressions doc. - Important note: the record that is matched is not the original record, - but a synthetic one, with any enclosing double quotes (but not enclos- + Important note: the record that is matched is not the original record, + but a synthetic one, with any enclosing double quotes (but not enclos- ing whitespace) removed, and always comma-separated (which means that a - field containing a comma will appear like two fields). Eg, if the + field containing a comma will appear like two fields). Eg, if the original record is 2020-01-01; "Acme, Inc."; 1,000, the REGEX will ac- tually see 2020-01-01,Acme, Inc., 1,000). + Matching individual fields Or, MATCHER can be a field matcher, like this: %CSVFIELD REGEX - which matches just the content of a particular CSV field. CSVFIELD is - a percent sign followed by the field's name or column number, like + which matches just the content of a particular CSV field. CSVFIELD is + a percent sign followed by the field's name or column number, like %date or %1. + Combining matchers A single matcher can be written on the same line as the "if"; or multi- ple matchers can be written on the following lines, non-indented. Mul- - tiple matchers are OR'd (any one of them can match). + tiple matchers are OR'd (any one of them can match), unless one begins + with an & symbol, in which case it is AND'ed with the previous matcher. + + if + MATCHER + & MATCHER + RULE + Rules applied on successful match After the patterns there should be one or more rules to apply, all in- dented by at least one space. 
Three kinds of rule are allowed in con- ditional blocks: @@ -535,8 +549,63 @@ account2 expenses:business:banking comment XXX deductible ? check it + if table + if,CSVFIELDNAME1,CSVFIELDNAME2,...,CSVFIELDNAMEn + MATCHER1,VALUE11,VALUE12,...,VALUE1n + MATCHER2,VALUE21,VALUE22,...,VALUE2n + MATCHER3,VALUE31,VALUE32,...,VALUE3n + + + Conditional tables ("if tables") are a different syntax to specify + field assignments that will be applied only to CSV records which match + certain patterns. + + MATCHER could be either field or record matcher, as described above. + When MATCHER matches, values from that row would be assigned to the CSV + fields named on the if line, in the same order. + + Therefore if table is exactly equivalent to a sequence of of if blocks: + + if MATCHER1 + CSVFIELDNAME1 VALUE11 + CSVFIELDNAME2 VALUE12 + ... + CSVFIELDNAMEn VALUE1n + + if MATCHER2 + CSVFIELDNAME1 VALUE21 + CSVFIELDNAME2 VALUE22 + ... + CSVFIELDNAMEn VALUE2n + + if MATCHER3 + CSVFIELDNAME1 VALUE31 + CSVFIELDNAME2 VALUE32 + ... + CSVFIELDNAMEn VALUE3n + + Each line starting with MATCHER should contain enough (possibly empty) + values for all the listed fields. + + Rules would be checked and applied in the order they are listed in the + table and, like with if blocks, later rules (in the same or another ta- + ble) or if blocks could override the effect of any rule. + + Instead of ',' you can use a variety of other non-alphanumeric charac- + ters as a separator. First character after if is taken to be the sepa- + rator for the rest of the table. It is the responsibility of the user + to ensure that separator does not occur inside MATCHERs and values - + there is no way to escape separator. + + Example: + + if,account2,comment + atm transaction fee,expenses:business:banking,deductible? 
check it + %description groceries,expenses:groceries, + 2020/01/12.*Plumbing LLC,expenses:house:upkeep,emergency plumbing call-out + end - This rule can be used inside if blocks (only), to make hledger stop + This rule can be used inside if blocks (only), to make hledger stop reading this CSV file and move on to the next input file, or to command execution. Eg: @@ -547,10 +616,10 @@ date-format date-format DATEFMT - This is a helper for the date (and date2) fields. If your CSV dates - are not formatted like YYYY-MM-DD, YYYY/MM/DD or YYYY.MM.DD, you'll - need to add a date-format rule describing them with a strptime date - parsing pattern, which must parse the CSV date value completely. Some + This is a helper for the date (and date2) fields. If your CSV dates + are not formatted like YYYY-MM-DD, YYYY/MM/DD or YYYY.MM.DD, you'll + need to add a date-format rule describing them with a strptime date + parsing pattern, which must parse the CSV date value completely. Some examples: # MM/DD/YY @@ -572,15 +641,15 @@ mat.html#v:formatTime newest-first - hledger always sorts the generated transactions by date. Transactions - on the same date should appear in the same order as their CSV records, - as hledger can usually auto-detect whether the CSV's normal order is + hledger always sorts the generated transactions by date. Transactions + on the same date should appear in the same order as their CSV records, + as hledger can usually auto-detect whether the CSV's normal order is oldest first or newest first. 
But if all of the following are true: - o the CSV might sometimes contain just one day of data (all records + o the CSV might sometimes contain just one day of data (all records having the same date) - o the CSV records are normally in reverse chronological order (newest + o the CSV records are normally in reverse chronological order (newest at the top) o and you care about preserving the order of same-day transactions @@ -593,9 +662,9 @@ include include RULESFILE - This includes the contents of another CSV rules file at this point. - RULESFILE is an absolute file path or a path relative to the current - file's directory. This can be useful for sharing common rules between + This includes the contents of another CSV rules file at this point. + RULESFILE is an absolute file path or a path relative to the current + file's directory. This can be useful for sharing common rules between several rules files, eg: # someaccount.csv.rules @@ -610,10 +679,10 @@ balance-type Balance assertions generated by assigning to balanceN are of the simple - = type by default, which is a single-commodity, subaccount-excluding + = type by default, which is a single-commodity, subaccount-excluding assertion. You may find the subaccount-including variants more useful, - eg if you have created some virtual subaccounts of checking to help - with budgeting. You can select a different type of assertion with the + eg if you have created some virtual subaccounts of checking to help + with budgeting. You can select a different type of assertion with the balance-type rule: # balance assertions will consider all commodities and all subaccounts @@ -628,19 +697,19 @@ TIPS Rapid feedback - It's a good idea to get rapid feedback while creating/troubleshooting + It's a good idea to get rapid feedback while creating/troubleshooting CSV rules. 
Here's a good way, using entr from http://eradman.com/entr- project : $ ls foo.csv* | entr bash -c 'echo ----; hledger -f foo.csv print desc:SOMEDESC' - A desc: query (eg) is used to select just one, or a few, transactions - of interest. "bash -c" is used to run multiple commands, so we can - echo a separator each time the command re-runs, making it easier to + A desc: query (eg) is used to select just one, or a few, transactions + of interest. "bash -c" is used to run multiple commands, so we can + echo a separator each time the command re-runs, making it easier to read the output. Valid CSV - hledger accepts CSV conforming to RFC 4180. When CSV values are en- + hledger accepts CSV conforming to RFC 4180. When CSV values are en- closed in quotes, note: o they must be double quotes (not single quotes) @@ -648,10 +717,10 @@ o spaces outside the quotes are not allowed File Extension - CSV ("Character Separated Values") files should be named with one of - these filename extensions: .csv, .ssv, .tsv. Or, the file path should - be prefixed with one of csv:, ssv:, tsv:. This helps hledger identify - the format and show the right error messages. For example: + To help hledger identify the format and show the right error messages, + CSV/SSV/TSV files should normally be named with a .csv, .ssv or .tsv + filename extension. Or, the file path should be prefixed with csv:, + ssv: or tsv:. Eg: $ hledger -f foo.ssv print @@ -659,7 +728,8 @@ $ cat foo | hledger -f ssv:- foo - More about this: Input files in the hledger manual. + You can override the file extension with a separator rule if needed. + See also: Input files in the hledger manual. 
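As an aside, the patterns accepted by the date-format rule discussed earlier use strptime-style directives. hledger itself parses them with Haskell's Data.Time, but the common directives (%Y, %m, %d, %y) behave the same in most strptime implementations, so a quick cross-check with Python can help when debugging a pattern (an illustration only, not part of hledger):

```python
from datetime import datetime

# Hypothetical cross-check of the manual's date-format examples.
# Python's strptime shares the common directives with the pattern
# language used by hledger's date-format rule.
for csv_value, pattern in [
    ("11/27/20",   "%m/%d/%y"),   # MM/DD/YY
    ("11/27/2020", "%m/%d/%Y"),   # MM/DD/YYYY
    ("2020-11-27", "%Y-%m-%d"),   # ISO format, parsed by default
]:
    d = datetime.strptime(csv_value, pattern).date()
    print(csv_value, "->", d.isoformat())  # each prints ... -> 2020-11-27
```

Note that not every directive carries over (eg %-d padding control is platform-dependent in Python), so treat this only as a rough sanity check.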
Reading multiple CSV files If you use multiple -f options to read multiple CSV files at once, @@ -852,4 +922,4 @@ -hledger 1.18.1 June 2020 hledger_csv(5) +hledger 1.18.99 September 2020 hledger_csv(5) diff -Nru haskell-hledger-lib-1.18.1/hledger_journal.5 haskell-hledger-lib-1.19.1/hledger_journal.5 --- haskell-hledger-lib-1.18.1/hledger_journal.5 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_journal.5 2020-09-07 22:41:25.000000000 +0000 @@ -1,6 +1,6 @@ .\"t -.TH "hledger_journal" "5" "June 2020" "hledger 1.18.1" "hledger User Manuals" +.TH "hledger_journal" "5" "September 2020" "hledger 1.18.99" "hledger User Manuals" @@ -670,10 +670,13 @@ \f[R] .fi .RE +.IP "4." 3 +Like 1, but the \f[C]\[at]\f[R] is parenthesised, i.e. +\f[C](\[at])\f[R]; this is for compatibility with Ledger journals +(Virtual posting costs), and is equivalent to 1 in hledger. .IP "5." 3 -Like 1 and 2, but the \f[C]\[at]\f[R] or \f[C]\[at]\[at]\f[R] is -parenthesised; this is for compatibility with Ledger journals (Virtual -posting costs), and in hledger is equivalent to 1 and 2. +Like 2, but as in 4 the \f[C]\[at]\[at]\f[R] is parenthesised, i.e. +\f[C](\[at]\[at])\f[R]; in hledger, this is equivalent to 2. .PP Use the \f[C]-B/--cost\f[R] flag to convert amounts to their transaction price\[aq]s commodity, if any. @@ -1381,62 +1384,140 @@ .fi .SS Account types .PP -hledger recognises five types (or classes) of account: Asset, Liability, -Equity, Revenue, Expense. -This is used by a few accounting-aware reports such as balancesheet, -incomestatement and cashflow. -.SS Auto-detected account types +hledger recognises five main types of account, corresponding to the +account classes in the accounting equation: .PP -If you name your top-level accounts with some variation of -\f[C]assets\f[R], \f[C]liabilities\f[R]/\f[C]debts\f[R], -\f[C]equity\f[R], \f[C]revenues\f[R]/\f[C]income\f[R], or -\f[C]expenses\f[R], their types are detected automatically. 
-.SS Account types declared with tags +\f[C]Asset\f[R], \f[C]Liability\f[R], \f[C]Equity\f[R], +\f[C]Revenue\f[R], \f[C]Expense\f[R]. .PP -More generally, you can declare an account\[aq]s type with an account -directive, by writing a \f[C]type:\f[R] tag in a comment, followed by -one of the words \f[C]Asset\f[R], \f[C]Liability\f[R], \f[C]Equity\f[R], -\f[C]Revenue\f[R], \f[C]Expense\f[R], or one of the letters -\f[C]ALERX\f[R] (case insensitive): +These account types are important for controlling which accounts appear +in the balancesheet, balancesheetequity, incomestatement reports (and +probably for other things in future). +.PP +Additionally, we recognise the \f[C]Cash\f[R] type, which is also an +\f[C]Asset\f[R], and which causes accounts to appear in the cashflow +report. +(\[dq]Cash\[dq] here means liquid assets, eg bank balances but typically +not investments or receivables.) +.SS Declaring account types +.PP +Generally, to make these reports work you should declare your top-level +accounts and their types, using account directives with \f[C]type:\f[R] +tags. +.PP +The tag\[aq]s value should be one of: \f[C]Asset\f[R], +\f[C]Liability\f[R], \f[C]Equity\f[R], \f[C]Revenue\f[R], +\f[C]Expense\f[R], \f[C]Cash\f[R], \f[C]A\f[R], \f[C]L\f[R], +\f[C]E\f[R], \f[C]R\f[R], \f[C]X\f[R], \f[C]C\f[R] (all case +insensitive). +The type is inherited by all subaccounts except where they override it. 
+Here\[aq]s a complete example: .IP .nf \f[C] -account assets ; type:Asset -account liabilities ; type:Liability -account equity ; type:Equity -account revenues ; type:Revenue -account expenses ; type:Expense +account assets ; type: Asset +account assets:bank ; type: Cash +account assets:cash ; type: Cash +account liabilities ; type: Liability +account equity ; type: Equity +account revenues ; type: Revenue +account expenses ; type: Expense \f[R] .fi -.SS Account types declared with account type codes +.SS Auto-detected account types +.PP +If you happen to use common english top-level account names, you may not +need to declare account types, as they will be detected automatically +using the following rules: +.PP +.TS +tab(@); +l l. +T{ +If name matches regular expression: +T}@T{ +account type is: +T} +_ +T{ +\f[C]\[ha]assets?(:|$)\f[R] +T}@T{ +\f[C]Asset\f[R] +T} +T{ +\f[C]\[ha](debts?|liabilit(y|ies))(:|$)\f[R] +T}@T{ +\f[C]Liability\f[R] +T} +T{ +\f[C]\[ha]equity(:|$)\f[R] +T}@T{ +\f[C]Equity\f[R] +T} +T{ +\f[C]\[ha](income|revenue)s?(:|$)\f[R] +T}@T{ +\f[C]Revenue\f[R] +T} +T{ +\f[C]\[ha]expenses?(:|$)\f[R] +T}@T{ +\f[C]Expense\f[R] +T} +.TE +.PP +.TS +tab(@); +lw(56.9n) lw(13.1n). +T{ +If account type is \f[C]Asset\f[R] and name does not contain regular +expression: +T}@T{ +account type is: +T} +_ +T{ +\f[C](investment|receivable|:A/R|:fixed)\f[R] +T}@T{ +\f[C]Cash\f[R] +T} +.TE +.PP +Even so, explicit declarations may be a good idea, for clarity and +predictability. +.SS Interference from auto-detected account types .PP -Or, you can write one of those letters separated from the account name -by two or more spaces, but this should probably be considered deprecated -as of hledger 1.13: +If you assign any account type, it\[aq]s a good idea to assign all of +them, to prevent any confusion from mixing declared and auto-detected +types. 
+Although it\[aq]s unlikely to happen in real life, here\[aq]s an +example: with the following journal, \f[C]balancesheetequity\f[R] shows +\[dq]liabilities\[dq] in both Liabilities and Equity sections. +Declaring another account as \f[C]type:Liability\f[R] would fix it: .IP .nf \f[C] -account assets A -account liabilities L -account equity E -account revenues R -account expenses X +account liabilities ; type:Equity + +2020-01-01 + assets 1 + liabilities 1 + equity -2 \f[R] .fi -.SS Overriding auto-detected types +.SS Old account type syntax .PP -If you ever override the types of those auto-detected english account -names mentioned above, you might need to help the reports a bit. -Eg: +In some hledger journals you might instead see this old syntax (the +letters ALERX, separated from the account name by two or more spaces); +this is deprecated and may be removed soon: .IP .nf \f[C] -; make \[dq]liabilities\[dq] not have the liability type - who knows why -account liabilities ; type:E - -; we need to ensure some other account has the liability type, -; otherwise balancesheet would still show \[dq]liabilities\[dq] under Liabilities -account - ; type:L +account assets A +account liabilities L +account equity E +account revenues R +account expenses X \f[R] .fi .SS Account display order @@ -1568,7 +1649,7 @@ .IP .nf \f[C] -alias /\[ha](.+):bank:([\[ha]:]+)(.*)/ = \[rs]1:\[rs]2 \[rs]3 +alias /\[ha](.+):bank:([\[ha]:]+):(.*)/ = \[rs]1:\[rs]2 \[rs]3 ; rewrites \[dq]assets:bank:wells fargo:checking\[dq] to \[dq]assets:wells fargo checking\[dq] \f[R] .fi diff -Nru haskell-hledger-lib-1.18.1/hledger_journal.info haskell-hledger-lib-1.19.1/hledger_journal.info --- haskell-hledger-lib-1.18.1/hledger_journal.info 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_journal.info 2020-09-07 22:41:25.000000000 +0000 @@ -4,8 +4,8 @@  File: hledger_journal.info, Node: Top, Up: (dir) -hledger_journal(5) hledger 1.18.1 -********************************* 
+hledger_journal(5) hledger 1.18.99 +********************************** Journal - hledger's default file format, representing a General Journal @@ -593,9 +593,12 @@ assets:euros €100 ; one hundred euros purchased assets:dollars $-135 ; for $135 - 4. 5. Like 1 and 2, but the '@' or '@@' is parenthesised; this is for - compatibility with Ledger journals (Virtual posting costs), and in - hledger is equivalent to 1 and 2. + 4. Like 1, but the '@' is parenthesised, i.e. '(@)'; this is for + compatibility with Ledger journals (Virtual posting costs), and is + equivalent to 1 in hledger. + + 5. Like 2, but as in 4 the '@@' is parenthesised, i.e. '(@@)'; in + hledger, this is equivalent to 2. Use the '-B/--cost' flag to convert amounts to their transaction price's commodity, if any. (mnemonic: "B" is from "cost Basis", as in @@ -1250,27 +1253,72 @@ 1.13.8.3 Account types ...................... -hledger recognises five types (or classes) of account: Asset, Liability, -Equity, Revenue, Expense. This is used by a few accounting-aware -reports such as balancesheet, incomestatement and cashflow. -Auto-detected account types If you name your top-level accounts with -some variation of 'assets', 'liabilities'/'debts', 'equity', -'revenues'/'income', or 'expenses', their types are detected -automatically. 
Account types declared with tags More generally, you can -declare an account's type with an account directive, by writing a -'type:' tag in a comment, followed by one of the words 'Asset', -'Liability', 'Equity', 'Revenue', 'Expense', or one of the letters -'ALERX' (case insensitive): - -account assets ; type:Asset -account liabilities ; type:Liability -account equity ; type:Equity -account revenues ; type:Revenue -account expenses ; type:Expense - - Account types declared with account type codes Or, you can write one -of those letters separated from the account name by two or more spaces, -but this should probably be considered deprecated as of hledger 1.13: +hledger recognises five main types of account, corresponding to the +account classes in the accounting equation: + + 'Asset', 'Liability', 'Equity', 'Revenue', 'Expense'. + + These account types are important for controlling which accounts +appear in the balancesheet, balancesheetequity, incomestatement reports +(and probably for other things in future). + + Additionally, we recognise the 'Cash' type, which is also an 'Asset', +and which causes accounts to appear in the cashflow report. ("Cash" +here means liquid assets, eg bank balances but typically not investments +or receivables.) Declaring account types Generally, to make these +reports work you should declare your top-level accounts and their types, +using account directives with 'type:' tags. + + The tag's value should be one of: 'Asset', 'Liability', 'Equity', +'Revenue', 'Expense', 'Cash', 'A', 'L', 'E', 'R', 'X', 'C' (all case +insensitive). The type is inherited by all subaccounts except where +they override it. 
Here's a complete example: + +account assets ; type: Asset +account assets:bank ; type: Cash +account assets:cash ; type: Cash +account liabilities ; type: Liability +account equity ; type: Equity +account revenues ; type: Revenue +account expenses ; type: Expense + + Auto-detected account types If you happen to use common english +top-level account names, you may not need to declare account types, as +they will be detected automatically using the following rules: + +If name matches regular account +expression: type is: +------------------------------------------------- +'^assets?(:|$)' 'Asset' +'^(debts?|liabilit(y|ies))(:|$)' 'Liability' +'^equity(:|$)' 'Equity' +'^(income|revenue)s?(:|$)' 'Revenue' +'^expenses?(:|$)' 'Expense' + +If account type is 'Asset' and name does not contain account type +regular expression: is: +-------------------------------------------------------------------------- +'(investment|receivable|:A/R|:fixed)' 'Cash' + + Even so, explicit declarations may be a good idea, for clarity and +predictability. Interference from auto-detected account types If you +assign any account type, it's a good idea to assign all of them, to +prevent any confusion from mixing declared and auto-detected types. +Although it's unlikely to happen in real life, here's an example: with +the following journal, 'balancesheetequity' shows "liabilities" in both +Liabilities and Equity sections. 
Declaring another account as +'type:Liability' would fix it: + +account liabilities ; type:Equity + +2020-01-01 + assets 1 + liabilities 1 + equity -2 + + Old account type syntax In some hledger journals you might instead +see this old syntax (the letters ALERX, separated from the account name +by two or more spaces); this is deprecated and may be removed soon: account assets A account liabilities L @@ -1278,17 +1326,6 @@ account revenues R account expenses X - Overriding auto-detected types If you ever override the types of -those auto-detected english account names mentioned above, you might -need to help the reports a bit. Eg: - -; make "liabilities" not have the liability type - who knows why -account liabilities ; type:E - -; we need to ensure some other account has the liability type, -; otherwise balancesheet would still show "liabilities" under Liabilities -account - ; type:L -  File: hledger_journal.info, Node: Account display order, Prev: Account types, Up: Declaring accounts @@ -1406,7 +1443,7 @@ REPLACEMENT. If REGEX contains parenthesised match groups, these can be referenced by the usual numeric backreferences in REPLACEMENT. 
Eg: -alias /^(.+):bank:([^:]+)(.*)/ = \1:\2 \3 +alias /^(.+):bank:([^:]+):(.*)/ = \1:\2 \3 ; rewrites "assets:bank:wells fargo:checking" to "assets:wells fargo checking" Also note that REPLACEMENT continues to the end of line (or on @@ -1828,124 +1865,124 @@  Tag Table: Node: Top76 -Node: Transactions1873 -Ref: #transactions1965 -Node: Dates3249 -Ref: #dates3348 -Node: Simple dates3413 -Ref: #simple-dates3539 -Node: Secondary dates4048 -Ref: #secondary-dates4202 -Node: Posting dates5538 -Ref: #posting-dates5667 -Node: Status7039 -Ref: #status7160 -Node: Description8868 -Ref: #description9002 -Node: Payee and note9322 -Ref: #payee-and-note9436 -Node: Comments9771 -Ref: #comments9897 -Node: Tags11091 -Ref: #tags11206 -Node: Postings12599 -Ref: #postings12727 -Node: Virtual postings13753 -Ref: #virtual-postings13870 -Node: Account names15175 -Ref: #account-names15316 -Node: Amounts15803 -Ref: #amounts15942 -Node: Digit group marks17050 -Ref: #digit-group-marks17198 -Node: Amount display style18136 -Ref: #amount-display-style18290 -Node: Transaction prices19727 -Ref: #transaction-prices19899 -Node: Lot prices and lot dates22231 -Ref: #lot-prices-and-lot-dates22428 -Node: Balance assertions22916 -Ref: #balance-assertions23102 -Node: Assertions and ordering24135 -Ref: #assertions-and-ordering24323 -Node: Assertions and included files25023 -Ref: #assertions-and-included-files25266 -Node: Assertions and multiple -f options25599 -Ref: #assertions-and-multiple--f-options25855 -Node: Assertions and commodities25987 -Ref: #assertions-and-commodities26219 -Node: Assertions and prices27376 -Ref: #assertions-and-prices27590 -Node: Assertions and subaccounts28030 -Ref: #assertions-and-subaccounts28259 -Node: Assertions and virtual postings28583 -Ref: #assertions-and-virtual-postings28825 -Node: Assertions and precision28967 -Ref: #assertions-and-precision29160 -Node: Balance assignments29427 -Ref: #balance-assignments29601 -Node: Balance assignments and prices30765 -Ref: 
#balance-assignments-and-prices30937 -Node: Directives31161 -Ref: #directives31320 -Node: Directives and multiple files37011 -Ref: #directives-and-multiple-files37194 -Node: Comment blocks37858 -Ref: #comment-blocks38041 -Node: Including other files38217 -Ref: #including-other-files38397 -Node: Default year39321 -Ref: #default-year39490 -Node: Declaring commodities39897 -Ref: #declaring-commodities40080 -Node: Default commodity41886 -Ref: #default-commodity42072 -Node: Declaring market prices42961 -Ref: #declaring-market-prices43156 -Node: Declaring accounts44013 -Ref: #declaring-accounts44199 -Node: Account comments45124 -Ref: #account-comments45287 -Node: Account subdirectives45711 -Ref: #account-subdirectives45906 -Node: Account types46219 -Ref: #account-types46403 -Node: Account display order48042 -Ref: #account-display-order48212 -Node: Rewriting accounts49363 -Ref: #rewriting-accounts49548 -Node: Basic aliases50305 -Ref: #basic-aliases50451 -Node: Regex aliases51155 -Ref: #regex-aliases51327 -Node: Combining aliases52045 -Ref: #combining-aliases52238 -Node: Aliases and multiple files53514 -Ref: #aliases-and-multiple-files53723 -Node: end aliases54302 -Ref: #end-aliases54459 -Node: Default parent account54560 -Ref: #default-parent-account54728 -Node: Periodic transactions55612 -Ref: #periodic-transactions55787 -Node: Periodic rule syntax57659 -Ref: #periodic-rule-syntax57865 -Node: Two spaces between period expression and description!58569 -Ref: #two-spaces-between-period-expression-and-description58888 -Node: Forecasting with periodic transactions59572 -Ref: #forecasting-with-periodic-transactions59877 -Node: Budgeting with periodic transactions61932 -Ref: #budgeting-with-periodic-transactions62171 -Node: Auto postings62620 -Ref: #auto-postings62760 -Node: Auto postings and multiple files64939 -Ref: #auto-postings-and-multiple-files65143 -Node: Auto postings and dates65352 -Ref: #auto-postings-and-dates65626 -Node: Auto postings and transaction balancing / 
inferred amounts / balance assertions65801 -Ref: #auto-postings-and-transaction-balancing-inferred-amounts-balance-assertions66152 -Node: Auto posting tags66494 -Ref: #auto-posting-tags66709 +Node: Transactions1875 +Ref: #transactions1967 +Node: Dates3251 +Ref: #dates3350 +Node: Simple dates3415 +Ref: #simple-dates3541 +Node: Secondary dates4050 +Ref: #secondary-dates4204 +Node: Posting dates5540 +Ref: #posting-dates5669 +Node: Status7041 +Ref: #status7162 +Node: Description8870 +Ref: #description9004 +Node: Payee and note9324 +Ref: #payee-and-note9438 +Node: Comments9773 +Ref: #comments9899 +Node: Tags11093 +Ref: #tags11208 +Node: Postings12601 +Ref: #postings12729 +Node: Virtual postings13755 +Ref: #virtual-postings13872 +Node: Account names15177 +Ref: #account-names15318 +Node: Amounts15805 +Ref: #amounts15944 +Node: Digit group marks17052 +Ref: #digit-group-marks17200 +Node: Amount display style18138 +Ref: #amount-display-style18292 +Node: Transaction prices19729 +Ref: #transaction-prices19901 +Node: Lot prices and lot dates22332 +Ref: #lot-prices-and-lot-dates22529 +Node: Balance assertions23017 +Ref: #balance-assertions23203 +Node: Assertions and ordering24236 +Ref: #assertions-and-ordering24424 +Node: Assertions and included files25124 +Ref: #assertions-and-included-files25367 +Node: Assertions and multiple -f options25700 +Ref: #assertions-and-multiple--f-options25956 +Node: Assertions and commodities26088 +Ref: #assertions-and-commodities26320 +Node: Assertions and prices27477 +Ref: #assertions-and-prices27691 +Node: Assertions and subaccounts28131 +Ref: #assertions-and-subaccounts28360 +Node: Assertions and virtual postings28684 +Ref: #assertions-and-virtual-postings28926 +Node: Assertions and precision29068 +Ref: #assertions-and-precision29261 +Node: Balance assignments29528 +Ref: #balance-assignments29702 +Node: Balance assignments and prices30866 +Ref: #balance-assignments-and-prices31038 +Node: Directives31262 +Ref: #directives31421 +Node: Directives 
and multiple files37112 +Ref: #directives-and-multiple-files37295 +Node: Comment blocks37959 +Ref: #comment-blocks38142 +Node: Including other files38318 +Ref: #including-other-files38498 +Node: Default year39422 +Ref: #default-year39591 +Node: Declaring commodities39998 +Ref: #declaring-commodities40181 +Node: Default commodity41987 +Ref: #default-commodity42173 +Node: Declaring market prices43062 +Ref: #declaring-market-prices43257 +Node: Declaring accounts44114 +Ref: #declaring-accounts44300 +Node: Account comments45225 +Ref: #account-comments45388 +Node: Account subdirectives45812 +Ref: #account-subdirectives46007 +Node: Account types46320 +Ref: #account-types46504 +Node: Account display order49550 +Ref: #account-display-order49720 +Node: Rewriting accounts50871 +Ref: #rewriting-accounts51056 +Node: Basic aliases51813 +Ref: #basic-aliases51959 +Node: Regex aliases52663 +Ref: #regex-aliases52835 +Node: Combining aliases53554 +Ref: #combining-aliases53747 +Node: Aliases and multiple files55023 +Ref: #aliases-and-multiple-files55232 +Node: end aliases55811 +Ref: #end-aliases55968 +Node: Default parent account56069 +Ref: #default-parent-account56237 +Node: Periodic transactions57121 +Ref: #periodic-transactions57296 +Node: Periodic rule syntax59168 +Ref: #periodic-rule-syntax59374 +Node: Two spaces between period expression and description!60078 +Ref: #two-spaces-between-period-expression-and-description60397 +Node: Forecasting with periodic transactions61081 +Ref: #forecasting-with-periodic-transactions61386 +Node: Budgeting with periodic transactions63441 +Ref: #budgeting-with-periodic-transactions63680 +Node: Auto postings64129 +Ref: #auto-postings64269 +Node: Auto postings and multiple files66448 +Ref: #auto-postings-and-multiple-files66652 +Node: Auto postings and dates66861 +Ref: #auto-postings-and-dates67135 +Node: Auto postings and transaction balancing / inferred amounts / balance assertions67310 +Ref: 
#auto-postings-and-transaction-balancing-inferred-amounts-balance-assertions67661 +Node: Auto posting tags68003 +Ref: #auto-posting-tags68218  End Tag Table diff -Nru haskell-hledger-lib-1.18.1/hledger_journal.txt haskell-hledger-lib-1.19.1/hledger_journal.txt --- haskell-hledger-lib-1.18.1/hledger_journal.txt 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_journal.txt 2020-09-07 22:41:25.000000000 +0000 @@ -472,11 +472,14 @@ assets:euros EUR100 ; one hundred euros purchased assets:dollars $-135 ; for $135 - 5. Like 1 and 2, but the @ or @@ is parenthesised; this is for compati- - bility with Ledger journals (Virtual posting costs), and in hledger - is equivalent to 1 and 2. + 4. Like 1, but the @ is parenthesised, i.e. (@); this is for compati- + bility with Ledger journals (Virtual posting costs), and is equiva- + lent to 1 in hledger. - Use the -B/--cost flag to convert amounts to their transaction price's + 5. Like 2, but as in 4 the @@ is parenthesised, i.e. (@@); in hledger, + this is equivalent to 2. + + Use the -B/--cost flag to convert amounts to their transaction price's commodity, if any. (mnemonic: "B" is from "cost Basis", as in Ledger). Eg here is how -B affects the balance report for the example above: @@ -487,8 +490,8 @@ $-135 assets:dollars $135 assets:euros # <- the euros' cost - Note -B is sensitive to the order of postings when a transaction price - is inferred: the inferred price will be in the commodity of the last + Note -B is sensitive to the order of postings when a transaction price + is inferred: the inferred price will be in the commodity of the last amount. 
So if example 3's postings are reversed, while the transaction is equivalent, -B shows something different: @@ -501,18 +504,18 @@ EUR100 assets:euros Lot prices and lot dates - Ledger allows another kind of price, lot price (four variants: {UNIT- + Ledger allows another kind of price, lot price (four variants: {UNIT- PRICE}, {{TOTALPRICE}}, {=FIXEDUNITPRICE}, {{=FIXEDTOTALPRICE}}), and/or a lot date ([DATE]) to be specified. These are normally used to - select a lot when selling investments. hledger will parse these, for - compatibility with Ledger journals, but currently ignores them. A - transaction price, lot price and/or lot date may appear in any order, + select a lot when selling investments. hledger will parse these, for + compatibility with Ledger journals, but currently ignores them. A + transaction price, lot price and/or lot date may appear in any order, after the posting amount and before the balance assertion if any. Balance assertions - hledger supports Ledger-style balance assertions in journal files. - These look like, for example, = EXPECTEDBALANCE following a posting's - amount. Eg here we assert the expected dollar balance in accounts a + hledger supports Ledger-style balance assertions in journal files. + These look like, for example, = EXPECTEDBALANCE following a posting's + amount. Eg here we assert the expected dollar balance in accounts a and b after each posting: 2013/1/1 @@ -524,32 +527,32 @@ b $-1 =$-2 After reading a journal file, hledger will check all balance assertions - and report an error if any of them fail. Balance assertions can pro- - tect you from, eg, inadvertently disrupting reconciled balances while - cleaning up old entries. You can disable them temporarily with the + and report an error if any of them fail. Balance assertions can pro- + tect you from, eg, inadvertently disrupting reconciled balances while + cleaning up old entries. 
You can disable them temporarily with the -I/--ignore-assertions flag, which can be useful for troubleshooting or - for reading Ledger files. (Note: this flag currently does not disable + for reading Ledger files. (Note: this flag currently does not disable balance assignments, below). Assertions and ordering - hledger sorts an account's postings and assertions first by date and - then (for postings on the same day) by parse order. Note this is dif- + hledger sorts an account's postings and assertions first by date and + then (for postings on the same day) by parse order. Note this is dif- ferent from Ledger, which sorts assertions only by parse order. (Also, - Ledger assertions do not see the accumulated effect of repeated post- + Ledger assertions do not see the accumulated effect of repeated post- ings to the same account within a transaction.) So, hledger balance assertions keep working if you reorder differently- - dated transactions within the journal. But if you reorder same-dated - transactions or postings, assertions might break and require updating. + dated transactions within the journal. But if you reorder same-dated + transactions or postings, assertions might break and require updating. This order dependence does bring an advantage: precise control over the order of postings and assertions within a day, so you can assert intra- day balances. Assertions and included files - With included files, things are a little more complicated. Including - preserves the ordering of postings and assertions. If you have multi- - ple postings to an account on the same day, split across different - files, and you also want to assert the account's balance on the same + With included files, things are a little more complicated. Including + preserves the ordering of postings and assertions. 
If you have multi- + ple postings to an account on the same day, split across different + files, and you also want to assert the account's balance on the same day, you'll have to put the assertion in the right file. Assertions and multiple -f options @@ -557,15 +560,15 @@ -f options. Use include or concatenate the files instead. Assertions and commodities - The asserted balance must be a simple single-commodity amount, and in - fact the assertion checks only this commodity's balance within the - (possibly multi-commodity) account balance. This is how assertions + The asserted balance must be a simple single-commodity amount, and in + fact the assertion checks only this commodity's balance within the + (possibly multi-commodity) account balance. This is how assertions work in Ledger also. We could call this a "partial" balance assertion. To assert the balance of more than one commodity in an account, you can write multiple postings, each asserting one commodity's balance. - You can make a stronger "total" balance assertion by writing a double + You can make a stronger "total" balance assertion by writing a double equals sign (== EXPECTEDBALANCE). This asserts that there are no other unasserted commodities in the account (or, that their balance is 0). @@ -585,7 +588,7 @@ a 0 == $1 It's not yet possible to make a complete assertion about a balance that - has multiple commodities. One workaround is to isolate each commodity + has multiple commodities. One workaround is to isolate each commodity into its own subaccount: 2013/1/1 @@ -599,21 +602,21 @@ a:euro 0 == 1EUR Assertions and prices - Balance assertions ignore transaction prices, and should normally be + Balance assertions ignore transaction prices, and should normally be written without one: 2019/1/1 (a) $1 @ EUR1 = $1 - We do allow prices to be written there, however, and print shows them, - even though they don't affect whether the assertion passes or fails. 
- This is for backward compatibility (hledger's close command used to - generate balance assertions with prices), and because balance assign- + We do allow prices to be written there, however, and print shows them, + even though they don't affect whether the assertion passes or fails. + This is for backward compatibility (hledger's close command used to + generate balance assertions with prices), and because balance assign- ments do use them (see below). Assertions and subaccounts - The balance assertions above (= and ==) do not count the balance from - subaccounts; they check the account's exclusive balance only. You can + The balance assertions above (= and ==) do not count the balance from + subaccounts; they check the account's exclusive balance only. You can assert the balance including subaccounts by writing =* or ==*, eg: 2019/1/1 @@ -627,16 +630,16 @@ tual. They are not affected by the --real/-R flag or real: query. Assertions and precision - Balance assertions compare the exactly calculated amounts, which are - not always what is shown by reports. Eg a commodity directive may - limit the display precision, but this will not affect balance asser- + Balance assertions compare the exactly calculated amounts, which are + not always what is shown by reports. Eg a commodity directive may + limit the display precision, but this will not affect balance asser- tions. Balance assertion failure messages show exact amounts. Balance assignments - Ledger-style balance assignments are also supported. These are like - balance assertions, but with no posting amount on the left side of the - equals sign; instead it is calculated automatically so as to satisfy - the assertion. This can be a convenience during data entry, eg when + Ledger-style balance assignments are also supported. These are like + balance assertions, but with no posting amount on the left side of the + equals sign; instead it is calculated automatically so as to satisfy + the assertion. 
This can be a convenience during data entry, eg when setting opening balances: ; starting a new journal, set asset account balances @@ -654,14 +657,14 @@ expenses:misc The calculated amount depends on the account's balance in the commodity - at that point (which depends on the previously-dated postings of the - commodity to that account since the last balance assertion or assign- + at that point (which depends on the previously-dated postings of the + commodity to that account since the last balance assertion or assign- ment). Note that using balance assignments makes your journal a little less explicit; to know the exact amount posted, you have to run hledger or do the calculations yourself, instead of just reading it. Balance assignments and prices - A transaction price in a balance assignment will cause the calculated + A transaction price in a balance assignment will cause the calculated amount to have that price attached: 2019/1/1 @@ -672,87 +675,88 @@ (a) $1 @ EUR2 = $1 @ EUR2 Directives - A directive is a line in the journal beginning with a special keyword, + A directive is a line in the journal beginning with a special keyword, that influences how the journal is processed. hledger's directives are based on a subset of Ledger's, but there are many differences (and also some differences between hledger versions). Directives' behaviour and interactions can get a little bit complex, so - here is a table summarising the directives and their effects, with + here is a table summarising the directives and their effects, with links to more detailed docs. 
- direc- end di- subdi- purpose can affect (as of + direc- end di- subdi- purpose can affect (as of tive rective rec- 2018/06) tives ------------------------------------------------------------------------------------ - account any document account names, de- all entries in all - text clare account types & dis- files, before or + account any document account names, de- all entries in all + text clare account types & dis- files, before or play order after alias end rewrite account names following in- aliases line/included en- - tries until end of + tries until end of current file or end directive - apply end apply prepend a common parent to following in- + apply end apply prepend a common parent to following in- account account account names line/included en- - tries until end of + tries until end of current file or end directive comment end com- ignore part of journal following in- ment line/included en- - tries until end of + tries until end of current file or end directive - commod- format declare a commodity and its number notation: + commod- format declare a commodity and its number notation: ity number notation & display following entries style in that commodity - in all files; dis- + in all files; dis- play style: amounts of that commodity in reports - D declare a commodity to be default commodity: + D declare a commodity to be default commodity: used for commodityless following commod- - amounts, and its number no- ityless entries un- - tation & display style til end of current - file; number nota- + amounts, and its number no- ityless entries un- + tation & display style til end of current + file; number nota- tion: following en- - tries in that com- + tries in that com- modity until end of - current file; dis- + current file; dis- play style: amounts of that commodity in reports include include entries/directives what the included from another file directives affect P declare a market price for a amounts of that - commodity commodity in re- - ports, when 
-V is + commodity commodity in re- + ports, when -V is used - Y declare a year for yearless following in- + + Y declare a year for yearless following in- dates line/included en- - tries until end of + tries until end of current file - = declare an auto posting all entries in par- - rule, adding postings to ent/current/child + = declare an auto posting all entries in par- + rule, adding postings to ent/current/child other transactions files (but not sib- ling files, see #1212) And some definitions: - subdi- optional indented directive line immediately following a parent + subdi- optional indented directive line immediately following a parent rec- directive tive number how to interpret numbers when parsing journal entries (the iden- - nota- tity of the decimal separator character). (Currently each com- + nota- tity of the decimal separator character). (Currently each com- tion modity can have its own notation, even in the same file.) - dis- how to display amounts of a commodity in reports (symbol side + dis- how to display amounts of a commodity in reports (symbol side play and spacing, digit groups, decimal separator, decimal places) style - direc- which entries and (when there are multiple files) which files + direc- which entries and (when there are multiple files) which files tive are affected by a directive scope @@ -761,35 +765,35 @@ ports). Some directives have multiple effects. Directives and multiple files - If you use multiple -f/--file options, or the include directive, - hledger will process multiple input files. But note that directives + If you use multiple -f/--file options, or the include directive, + hledger will process multiple input files. But note that directives which affect input (see above) typically last only until the end of the file in which they occur. This may seem inconvenient, but it's intentional; it makes reports sta- - ble and deterministic, independent of the order of input. 
Otherwise - you could see different numbers if you happened to write -f options in - a different order, or if you moved includes around while cleaning up + ble and deterministic, independent of the order of input. Otherwise + you could see different numbers if you happened to write -f options in + a different order, or if you moved includes around while cleaning up your files. - It can be surprising though; for example, it means that alias direc- + It can be surprising though; for example, it means that alias direc- tives do not affect parent or sibling files (see below). Comment blocks - A line containing just comment starts a commented region of the file, + A line containing just comment starts a commented region of the file, and a line containing just end comment (or the end of the current file) ends it. See also comments. Including other files - You can pull in the content of additional files by writing an include + You can pull in the content of additional files by writing an include directive, like this: include FILEPATH - Only journal files can include, and only journal, timeclock or timedot + Only journal files can include, and only journal, timeclock or timedot files can be included (not CSV files, currently). - If the file path does not begin with a slash, it is relative to the + If the file path does not begin with a slash, it is relative to the current file's folder. A tilde means home directory, eg: include ~/main.journal. @@ -798,17 +802,17 @@ *.journal. There is limited support for recursive wildcards: **/ (the slash is re- - quired) matches 0 or more subdirectories. It's not super convenient - since you have to avoid include cycles and including directories, but + quired) matches 0 or more subdirectories. It's not super convenient + since you have to avoid include cycles and including directories, but this can be done, eg: include */**/*.journal. 
The path may also be prefixed to force a specific file format, overrid- - ing the file extension (as described in hledger.1 -> Input files): in- + ing the file extension (as described in hledger.1 -> Input files): in- clude timedot:~/notes/2020*.md. Default year - You can set a default year to be used for subsequent dates which don't - specify a year. This is a line beginning with Y followed by the year. + You can set a default year to be used for subsequent dates which don't + specify a year. This is a line beginning with Y followed by the year. Eg: Y2009 ; set default year to 2009 @@ -830,19 +834,19 @@ Declaring commodities The commodity directive has several functions: - 1. It declares commodities which may be used in the journal. This is + 1. It declares commodities which may be used in the journal. This is currently not enforced, but can serve as documentation. - 2. It declares what decimal mark character (period or comma) to expect - when parsing input - useful to disambiguate international number - formats in your data. (Without this, hledger will parse both 1,000 + 2. It declares what decimal mark character (period or comma) to expect + when parsing input - useful to disambiguate international number + formats in your data. (Without this, hledger will parse both 1,000 and 1.000 as 1). - 3. It declares the amount display style to use in output - decimal and + 3. It declares the amount display style to use in output - decimal and digit group marks, number of decimal places, symbol placement etc. - You are likely to run into one of the problems solved by commodity di- - rectives, sooner or later, so it's a good idea to just always use them + You are likely to run into one of the problems solved by commodity di- + rectives, sooner or later, so it's a good idea to just always use them to declare your commodities. A commodity directive is just the word commodity followed by an amount. @@ -855,8 +859,8 @@ ; separating thousands with comma. 
commodity 1,000.0000 AAAA - or on multiple lines, using the "format" subdirective. (In this case - the commodity symbol appears twice and should be the same in both + or on multiple lines, using the "format" subdirective. (In this case + the commodity symbol appears twice and should be the same in both places.): ; commodity SYMBOL @@ -869,22 +873,22 @@ format INR 1,00,00,000.00 The quantity of the amount does not matter; only the format is signifi- - cant. The number must include a decimal mark: either a period or a + cant. The number must include a decimal mark: either a period or a comma, followed by 0 or more decimal digits. - Note hledger normally uses banker's rounding, so 0.5 displayed with + Note hledger normally uses banker's rounding, so 0.5 displayed with zero decimal digits is "0". (More at Amount display style.) Default commodity - The D directive sets a default commodity, to be used for amounts with- + The D directive sets a default commodity, to be used for amounts with- out a commodity symbol (ie, plain numbers). This commodity will be ap- plied to all subsequent commodity-less amounts, or until the next D di- rective. (Note, this is different from Ledger's D.) - For compatibility/historical reasons, D also acts like a commodity di- + For compatibility/historical reasons, D also acts like a commodity di- rective, setting the commodity's display style (for output) and decimal mark (for parsing input). As with commodity, the amount must always be - written with a decimal mark (period or comma). If both directives are + written with a decimal mark (period or comma). If both directives are used, commodity's style takes precedence. The syntax is D AMOUNT. Eg: @@ -898,9 +902,9 @@ b Declaring market prices - The P directive declares a market price, which is an exchange rate be- - tween two commodities on a certain date. (In Ledger, they are called - "historical prices".) 
These are often obtained from a stock exchange, + The P directive declares a market price, which is an exchange rate be- + tween two commodities on a certain date. (In Ledger, they are called + "historical prices".) These are often obtained from a stock exchange, cryptocurrency exchange, or the foreign exchange market. Here is the format: @@ -911,16 +915,16 @@ o COMMODITYA is the symbol of the commodity being priced - o COMMODITYBAMOUNT is an amount (symbol and quantity) in a second com- + o COMMODITYBAMOUNT is an amount (symbol and quantity) in a second com- modity, giving the price in commodity B of one unit of commodity A. - These two market price directives say that one euro was worth 1.35 US + These two market price directives say that one euro was worth 1.35 US dollars during 2009, and $1.40 from 2010 onward: P 2009/1/1 EUR $1.35 P 2010/1/1 EUR $1.40 - The -V, -X and --value flags use these market prices to show amount + The -V, -X and --value flags use these market prices to show amount values in another commodity. See Valuation. Declaring accounts @@ -930,20 +934,20 @@ o They can document your intended chart of accounts, providing a refer- ence. - o They can store extra information about accounts (account numbers, + o They can store extra information about accounts (account numbers, notes, etc.) - o They can help hledger know your accounts' types (asset, liability, - equity, revenue, expense), useful for reports like balancesheet and + o They can help hledger know your accounts' types (asset, liability, + equity, revenue, expense), useful for reports like balancesheet and incomestatement. - o They control account display order in reports, allowing non-alpha- + o They control account display order in reports, allowing non-alpha- betic sorting (eg Revenues to appear above Expenses). - o They help with account name completion in the add command, hledger- + o They help with account name completion in the add command, hledger- iadd, hledger-web, ledger-mode etc. 
- The simplest form is just the word account followed by a hledger-style + The simplest form is just the word account followed by a hledger-style account name, eg: account assets:bank:checking @@ -951,7 +955,7 @@ Account comments Comments, beginning with a semicolon, can be added: - o on the same line, after two or more spaces (because ; is allowed in + o on the same line, after two or more spaces (because ; is allowed in account names) o on the next lines, indented @@ -965,7 +969,7 @@ Same-line comments are not supported by Ledger, or hledger <1.13. Account subdirectives - We also allow (and ignore) Ledger-style indented subdirectives, just + We also allow (and ignore) Ledger-style indented subdirectives, just for compatibility.: account assets:bank:checking @@ -978,31 +982,79 @@ [LEDGER-STYLE SUBDIRECTIVES, IGNORED] Account types - hledger recognises five types (or classes) of account: Asset, Liabil- - ity, Equity, Revenue, Expense. This is used by a few accounting-aware - reports such as balancesheet, incomestatement and cashflow. + hledger recognises five main types of account, corresponding to the ac- + count classes in the accounting equation: + + Asset, Liability, Equity, Revenue, Expense. + + These account types are important for controlling which accounts appear + in the balancesheet, balancesheetequity, incomestatement reports (and + probably for other things in future). + + Additionally, we recognise the Cash type, which is also an Asset, and + which causes accounts to appear in the cashflow report. ("Cash" here + means liquid assets, eg bank balances but typically not investments or + receivables.) + + Declaring account types + Generally, to make these reports work you should declare your top-level + accounts and their types, using account directives with type: tags. + + The tag's value should be one of: Asset, Liability, Equity, Revenue, + Expense, Cash, A, L, E, R, X, C (all case insensitive). 
The type is + inherited by all subaccounts except where they override it. Here's a + complete example: + + account assets ; type: Asset + account assets:bank ; type: Cash + account assets:cash ; type: Cash + account liabilities ; type: Liability + account equity ; type: Equity + account revenues ; type: Revenue + account expenses ; type: Expense Auto-detected account types - If you name your top-level accounts with some variation of assets, lia- - bilities/debts, equity, revenues/income, or expenses, their types are - detected automatically. - - Account types declared with tags - More generally, you can declare an account's type with an account di- - rective, by writing a type: tag in a comment, followed by one of the - words Asset, Liability, Equity, Revenue, Expense, or one of the letters - ALERX (case insensitive): - - account assets ; type:Asset - account liabilities ; type:Liability - account equity ; type:Equity - account revenues ; type:Revenue - account expenses ; type:Expense - - Account types declared with account type codes - Or, you can write one of those letters separated from the account name - by two or more spaces, but this should probably be considered depre- - cated as of hledger 1.13: + If you happen to use common english top-level account names, you may + not need to declare account types, as they will be detected automati- + cally using the following rules: + + If name matches regular account type is: + expression: + ---------------------------------------------- + ^assets?(:|$) Asset + ^(debts?|lia- Liability + bilit(y|ies))(:|$) + ^equity(:|$) Equity + ^(income|revenue)s?(:|$) Revenue + ^expenses?(:|$) Expense + + If account type is Asset and name does not contain regu- account type + lar expression: is: + -------------------------------------------------------------------------- + (investment|receivable|:A/R|:fixed) Cash + + Even so, explicit declarations may be a good idea, for clarity and pre- + dictability. 
+ + Interference from auto-detected account types + If you assign any account type, it's a good idea to assign all of them, + to prevent any confusion from mixing declared and auto-detected types. + Although it's unlikely to happen in real life, here's an example: with + the following journal, balancesheetequity shows "liabilities" in both + Liabilities and Equity sections. Declaring another account as type:Li- + ability would fix it: + + account liabilities ; type:Equity + + 2020-01-01 + assets 1 + liabilities 1 + equity -2 + + Old account type syntax + In some hledger journals you might instead see this old syntax (the + letters ALERX, separated from the account name by two or more spaces); + this is deprecated and may be removed soon: account assets A account liabilities L @@ -1010,20 +1062,9 @@ account revenues R account expenses X - Overriding auto-detected types - If you ever override the types of those auto-detected english account - names mentioned above, you might need to help the reports a bit. Eg: - - ; make "liabilities" not have the liability type - who knows why - account liabilities ; type:E - - ; we need to ensure some other account has the liability type, - ; otherwise balancesheet would still show "liabilities" under Liabilities - account - ; type:L - Account display order - Account directives also set the order in which accounts are displayed, - eg in reports, the hledger-ui accounts screen, and the hledger-web + Account directives also set the order in which accounts are displayed, + eg in reports, the hledger-ui accounts screen, and the hledger-web sidebar. By default accounts are listed in alphabetical order. But if you have these account directives in the journal: @@ -1045,20 +1086,20 @@ Undeclared accounts, if any, are displayed last, in alphabetical order. - Note that sorting is done at each level of the account tree (within - each group of sibling accounts under the same parent). 
And currently, + Note that sorting is done at each level of the account tree (within + each group of sibling accounts under the same parent). And currently, this directive: account other:zoo - would influence the position of zoo among other's subaccounts, but not + would influence the position of zoo among other's subaccounts, but not the position of other among the top-level accounts. This means: - o you will sometimes declare parent accounts (eg account other above) + o you will sometimes declare parent accounts (eg account other above) that you don't intend to post to, just to customize their display or- der - o sibling accounts stay together (you couldn't display x:y in between + o sibling accounts stay together (you couldn't display x:y in between a:b and a:c). Rewriting accounts @@ -1076,14 +1117,14 @@ o customising reports Account aliases also rewrite account names in account directives. They - do not affect account names being entered via hledger add or hledger- + do not affect account names being entered via hledger add or hledger- web. See also Rewrite account names. Basic aliases - To set an account alias, use the alias directive in your journal file. - This affects all subsequent journal entries in the current file or its + To set an account alias, use the alias directive in your journal file. + This affects all subsequent journal entries in the current file or its included files. The spaces around the = are optional: alias OLD = NEW @@ -1091,49 +1132,49 @@ Or, you can use the --alias 'OLD=NEW' option on the command line. This affects all entries. It's useful for trying out aliases interactively. - OLD and NEW are case sensitive full account names. hledger will re- - place any occurrence of the old account name with the new one. Subac- + OLD and NEW are case sensitive full account names. hledger will re- + place any occurrence of the old account name with the new one. Subac- counts are also affected. 
Eg: alias checking = assets:bank:wells fargo:checking ; rewrites "checking" to "assets:bank:wells fargo:checking", or "checking:a" to "assets:bank:wells fargo:checking:a" Regex aliases - There is also a more powerful variant that uses a regular expression, + There is also a more powerful variant that uses a regular expression, indicated by the forward slashes: alias /REGEX/ = REPLACEMENT or --alias '/REGEX/=REPLACEMENT'. - REGEX is a case-insensitive regular expression. Anywhere it matches - inside an account name, the matched part will be replaced by REPLACE- - MENT. If REGEX contains parenthesised match groups, these can be ref- + REGEX is a case-insensitive regular expression. Anywhere it matches + inside an account name, the matched part will be replaced by REPLACE- + MENT. If REGEX contains parenthesised match groups, these can be ref- erenced by the usual numeric backreferences in REPLACEMENT. Eg: - alias /^(.+):bank:([^:]+)(.*)/ = \1:\2 \3 + alias /^(.+):bank:([^:]+):(.*)/ = \1:\2 \3 ; rewrites "assets:bank:wells fargo:checking" to "assets:wells fargo checking" - Also note that REPLACEMENT continues to the end of line (or on command - line, to end of option argument), so it can contain trailing white- + Also note that REPLACEMENT continues to the end of line (or on command + line, to end of option argument), so it can contain trailing white- space. Combining aliases - You can define as many aliases as you like, using journal directives + You can define as many aliases as you like, using journal directives and/or command line options. - Recursive aliases - where an account name is rewritten by one alias, - then by another alias, and so on - are allowed. Each alias sees the + Recursive aliases - where an account name is rewritten by one alias, + then by another alias, and so on - are allowed. Each alias sees the effect of previously applied aliases. - In such cases it can be important to understand which aliases will be - applied and in which order. 
For (each account name in) each journal + In such cases it can be important to understand which aliases will be + applied and in which order. For (each account name in) each journal entry, we apply: - 1. alias directives preceding the journal entry, most recently parsed + 1. alias directives preceding the journal entry, most recently parsed first (ie, reading upward from the journal entry, bottom to top) - 2. --alias options, in the order they appeared on the command line + 2. --alias options, in the order they appeared on the command line (left to right). In other words, for (an account name in) a given journal entry: @@ -1144,20 +1185,20 @@ o aliases defined after/below the entry do not affect it. - This gives nearby aliases precedence over distant ones, and helps pro- - vide semantic stability - aliases will keep working the same way inde- + This gives nearby aliases precedence over distant ones, and helps pro- + vide semantic stability - aliases will keep working the same way inde- pendent of which files are being read and in which order. - In case of trouble, adding --debug=6 to the command line will show + In case of trouble, adding --debug=6 to the command line will show which aliases are being applied when. Aliases and multiple files - As explained at Directives and multiple files, alias directives do not + As explained at Directives and multiple files, alias directives do not affect parent or sibling files. Eg in this command, hledger -f a.aliases -f b.journal - account aliases defined in a.aliases will not affect b.journal. In- + account aliases defined in a.aliases will not affect b.journal. 
In- cluding the aliases doesn't work either: include a.aliases @@ -1179,14 +1220,14 @@ include c.journal ; also affected end aliases - You can clear (forget) all currently defined aliases with the end + You can clear (forget) all currently defined aliases with the end aliases directive: end aliases Default parent account - You can specify a parent account which will be prepended to all ac- - counts within a section of the journal. Use the apply account and end + You can specify a parent account which will be prepended to all ac- + counts within a section of the journal. Use the apply account and end apply account directives like so: apply account home @@ -1203,7 +1244,7 @@ home:food $10 home:cash $-10 - If end apply account is omitted, the effect lasts to the end of the + If end apply account is omitted, the effect lasts to the end of the file. Included files are also affected, eg: apply account business @@ -1212,50 +1253,50 @@ apply account personal include personal.journal - Prior to hledger 1.0, legacy account and end spellings were also sup- + Prior to hledger 1.0, legacy account and end spellings were also sup- ported. - A default parent account also affects account directives. It does not - affect account names being entered via hledger add or hledger-web. If - account aliases are present, they are applied after the default parent + A default parent account also affects account directives. It does not + affect account names being entered via hledger add or hledger-web. If + account aliases are present, they are applied after the default parent account. Periodic transactions - Periodic transaction rules describe transactions that recur. They al- - low hledger to generate temporary future transactions to help with - forecasting, so you don't have to write out each one in the journal, - and it's easy to try out different forecasts. Secondly, they are also + Periodic transaction rules describe transactions that recur. 
They al- + low hledger to generate temporary future transactions to help with + forecasting, so you don't have to write out each one in the journal, + and it's easy to try out different forecasts. Secondly, they are also used to define the budgets shown in budget reports. - Periodic transactions can be a little tricky, so before you use them, + Periodic transactions can be a little tricky, so before you use them, read this whole section - or at least these tips: - 1. Two spaces accidentally added or omitted will cause you trouble - + 1. Two spaces accidentally added or omitted will cause you trouble - read about this below. - 2. For troubleshooting, show the generated transactions with hledger - print --forecast tag:generated or hledger register --forecast + 2. For troubleshooting, show the generated transactions with hledger + print --forecast tag:generated or hledger register --forecast tag:generated. - 3. Forecasted transactions will begin only after the last non-fore- + 3. Forecasted transactions will begin only after the last non-fore- casted transaction's date. - 4. Forecasted transactions will end 6 months from today, by default. + 4. Forecasted transactions will end 6 months from today, by default. See below for the exact start/end rules. - 5. period expressions can be tricky. Their documentation needs im- + 5. period expressions can be tricky. Their documentation needs im- provement, but is worth studying. - 6. Some period expressions with a repeating interval must begin on a - natural boundary of that interval. Eg in weekly from DATE, DATE - must be a monday. ~ weekly from 2019/10/1 (a tuesday) will give an + 6. Some period expressions with a repeating interval must begin on a + natural boundary of that interval. Eg in weekly from DATE, DATE + must be a monday. ~ weekly from 2019/10/1 (a tuesday) will give an error. 7. Other period expressions with an interval are automatically expanded - to cover a whole number of that interval. 
(This is done to improve + to cover a whole number of that interval. (This is done to improve reports, but it also affects periodic transactions. Yes, it's a bit - inconsistent with the above.) Eg: ~ every 10th day of month from - 2020/01, which is equivalent to ~ every 10th day of month from + inconsistent with the above.) Eg: ~ every 10th day of month from + 2020/01, which is equivalent to ~ every 10th day of month from 2020/01/01, will be adjusted to start on 2019/12/10. Periodic rule syntax @@ -1267,17 +1308,17 @@ expenses:rent $2000 assets:bank:checking - There is an additional constraint on the period expression: the start - date must fall on a natural boundary of the interval. Eg monthly from + There is an additional constraint on the period expression: the start + date must fall on a natural boundary of the interval. Eg monthly from 2018/1/1 is valid, but monthly from 2018/1/15 is not. - Partial or relative dates (M/D, D, tomorrow, last week) in the period - expression can work (useful or not). They will be relative to today's - date, unless a Y default year directive is in effect, in which case + Partial or relative dates (M/D, D, tomorrow, last week) in the period + expression can work (useful or not). They will be relative to today's + date, unless a Y default year directive is in effect, in which case they will be relative to Y/1/1. Two spaces between period expression and description! - If the period expression is followed by a transaction description, + If the period expression is followed by a transaction description, these must be separated by two or more spaces. This helps hledger know where the period expression ends, so that descriptions can not acciden- tally alter their meaning, as in this example: @@ -1291,68 +1332,68 @@ So, - o Do write two spaces between your period expression and your transac- + o Do write two spaces between your period expression and your transac- tion description, if any. 
- o Don't accidentally write two spaces in the middle of your period ex- + o Don't accidentally write two spaces in the middle of your period ex- pression. Forecasting with periodic transactions - The --forecast flag activates any periodic transaction rules in the - journal. They will generate temporary recurring transactions, which - are not saved in the journal, but will appear in all reports (eg + The --forecast flag activates any periodic transaction rules in the + journal. They will generate temporary recurring transactions, which + are not saved in the journal, but will appear in all reports (eg print). This can be useful for estimating balances into the future, or - experimenting with different scenarios. Or, it can be used as a data + experimenting with different scenarios. Or, it can be used as a data entry aid: describe recurring transactions, and every so often copy the output of print --forecast into the journal. - These transactions will have an extra tag indicating which periodic + These transactions will have an extra tag indicating which periodic rule generated them: generated-transaction:~ PERIODICEXPR. And a simi- - lar, hidden tag (beginning with an underscore) which, because it's - never displayed by print, can be used to match transactions generated + lar, hidden tag (beginning with an underscore) which, because it's + never displayed by print, can be used to match transactions generated "just now": _generated-transaction:~ PERIODICEXPR. - Periodic transactions are generated within some forecast period. By + Periodic transactions are generated within some forecast period. By default, this o begins on the later of o the report start date if specified with -b/-p/date: - o the day after the latest normal (non-periodic) transaction in the + o the day after the latest normal (non-periodic) transaction in the journal, or today if there are no normal transactions. 
- o ends on the report end date if specified with -e/-p/date:, or 6 + o ends on the report end date if specified with -e/-p/date:, or 6 months (180 days) from today. - This means that periodic transactions will begin only after the latest - recorded transaction. And a recorded transaction dated in the future - can prevent generation of periodic transactions. (You can avoid that + This means that periodic transactions will begin only after the latest + recorded transaction. And a recorded transaction dated in the future + can prevent generation of periodic transactions. (You can avoid that by writing the future transaction as a one-time periodic rule instead - put tilde before the date, eg ~ YYYY-MM-DD ...). Or, you can set your own arbitrary "forecast period", which can overlap - recorded transactions, and need not be in the future, by providing an - option argument, like --forecast=PERIODEXPR. Note the equals sign is + recorded transactions, and need not be in the future, by providing an + option argument, like --forecast=PERIODEXPR. Note the equals sign is required, a space won't work. PERIODEXPR is a period expression, which - can specify the start date, end date, or both, like in a date: query. - (See also hledger.1 -> Report start & end date). Some examples: + can specify the start date, end date, or both, like in a date: query. + (See also hledger.1 -> Report start & end date). Some examples: --forecast=202001-202004, --forecast=jan-, --forecast=2020. Budgeting with periodic transactions - With the --budget flag, currently supported by the balance command, - each periodic transaction rule declares recurring budget goals for the - specified accounts. Eg the first example above declares a goal of - spending $2000 on rent (and also, a goal of depositing $2000 into - checking) every month. 
Goals and actual performance can then be com- + With the --budget flag, currently supported by the balance command, + each periodic transaction rule declares recurring budget goals for the + specified accounts. Eg the first example above declares a goal of + spending $2000 on rent (and also, a goal of depositing $2000 into + checking) every month. Goals and actual performance can then be com- pared in budget reports. - For more details, see: balance: Budget report and Budgeting and Fore- + For more details, see: balance: Budget report and Budgeting and Fore- casting. Auto postings - "Automated postings" or "auto postings" are extra postings which get - added automatically to transactions which match certain queries, de- + "Automated postings" or "auto postings" are extra postings which get + added automatically to transactions which match certain queries, de- fined by "auto posting rules", when you use the --auto flag. An auto posting rule looks a bit like a transaction: @@ -1362,27 +1403,27 @@ ... ACCOUNT [AMOUNT] - except the first line is an equals sign (mnemonic: = suggests match- - ing), followed by a query (which matches existing postings), and each - "posting" line describes a posting to be generated, and the posting + except the first line is an equals sign (mnemonic: = suggests match- + ing), followed by a query (which matches existing postings), and each + "posting" line describes a posting to be generated, and the posting amounts can be: - o a normal amount with a commodity symbol, eg $2. This will be used + o a normal amount with a commodity symbol, eg $2. This will be used as-is. o a number, eg 2. The commodity symbol (if any) from the matched post- ing will be added to this. - o a numeric multiplier, eg *2 (a star followed by a number N). The + o a numeric multiplier, eg *2 (a star followed by a number N). The matched posting's amount (and total price, if any) will be multiplied by N. 
- o a multiplier with a commodity symbol, eg *$2 (a star, number N, and + o a multiplier with a commodity symbol, eg *$2 (a star, number N, and symbol S). The matched posting's amount will be multiplied by N, and its commodity symbol will be replaced with S. - Any query term containing spaces must be enclosed in single or double - quotes, as on the command line. Eg, note the quotes around the second + Any query term containing spaces must be enclosed in single or double + quotes, as on the command line. Eg, note the quotes around the second query term below: = expenses:groceries 'expenses:dining out' @@ -1421,24 +1462,24 @@ Auto postings and multiple files An auto posting rule can affect any transaction in the current file, or - in any parent file or child file. Note, currently it will not affect + in any parent file or child file. Note, currently it will not affect sibling files (when multiple -f/--file are used - see #1212). Auto postings and dates - A posting date (or secondary date) in the matched posting, or (taking - precedence) a posting date in the auto posting rule itself, will also + A posting date (or secondary date) in the matched posting, or (taking + precedence) a posting date in the auto posting rule itself, will also be used in the generated posting. Auto postings and transaction balancing / inferred amounts / balance asser- tions Currently, auto postings are added: - o after missing amounts are inferred, and transactions are checked for + o after missing amounts are inferred, and transactions are checked for balancedness, o but before balance assertions are checked. - Note this means that journal entries must be balanced both before and + Note this means that journal entries must be balanced both before and after auto postings are added. This changed in hledger 1.12+; see #893 for background. 
@@ -1448,11 +1489,11 @@ o generated-posting:= QUERY - shows this was generated by an auto post- ing rule, and the query - o _generated-posting:= QUERY - a hidden tag, which does not appear in + o _generated-posting:= QUERY - a hidden tag, which does not appear in hledger's output. This can be used to match postings generated "just now", rather than generated in the past and saved to the journal. - Also, any transaction that has been changed by auto posting rules will + Also, any transaction that has been changed by auto posting rules will have these tags added: o modified: - this transaction was modified @@ -1463,7 +1504,7 @@ REPORTING BUGS - Report bugs at http://bugs.hledger.org (or on the #hledger IRC channel + Report bugs at http://bugs.hledger.org (or on the #hledger IRC channel or hledger mail list) @@ -1477,7 +1518,7 @@ SEE ALSO - hledger(1), hledger-ui(1), hledger-web(1), hledger-api(1), + hledger(1), hledger-ui(1), hledger-web(1), hledger-api(1), hledger_csv(5), hledger_journal(5), hledger_timeclock(5), hledger_time- dot(5), ledger(1) @@ -1485,4 +1526,4 @@ -hledger 1.18.1 June 2020 hledger_journal(5) +hledger 1.18.99 September 2020 hledger_journal(5) diff -Nru haskell-hledger-lib-1.18.1/hledger-lib.cabal haskell-hledger-lib-1.19.1/hledger-lib.cabal --- haskell-hledger-lib-1.18.1/hledger-lib.cabal 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger-lib.cabal 2020-09-07 22:41:25.000000000 +0000 @@ -4,10 +4,10 @@ -- -- see: https://github.com/sol/hpack -- --- hash: 22c04257dd0778ffcde96603d006e72ff6bf1b3da2ac7c3c12bc4e9df2064863 +-- hash: 4cff01322093c9a60a07a65c0ed150394360e71104dd3e7b52fc6782c5cf5a57 name: hledger-lib -version: 1.18.1 +version: 1.19.1 synopsis: A reusable library providing the core functionality of hledger description: A reusable library containing hledger's core functionality. 
This is used by most hledger* packages so that they support the same @@ -110,9 +110,9 @@ build-depends: Decimal >=0.5.1 , Glob >=0.9 - , aeson + , aeson >=1 , aeson-pretty - , ansi-terminal >=0.6.2.3 + , ansi-terminal >=0.9 , array , base >=4.9 && <4.15 , base-compat-batteries >=0.10.1 && <0.12 @@ -124,14 +124,13 @@ , cmdargs >=0.10 , containers , data-default >=0.5 - , deepseq , directory , extra >=1.6.3 , fgl >=5.5.4.0 , file-embed >=0.0.10 , filepath , hashtables >=1.2.3.1 - , megaparsec >=7.0.0 && <8.1 + , megaparsec >=7.0.0 && <9.1 , mtl >=2.2.1 , old-time , parsec >=3 @@ -149,6 +148,7 @@ , timeit , transformers >=0.2 , uglymemo + , unordered-containers >=0.2 , utf8-string >=0.3.5 default-language: Haskell2010 @@ -162,9 +162,9 @@ build-depends: Decimal >=0.5.1 , Glob >=0.7 - , aeson + , aeson >=1 , aeson-pretty - , ansi-terminal >=0.6.2.3 + , ansi-terminal >=0.9 , array , base >=4.9 && <4.15 , base-compat-batteries >=0.10.1 && <0.12 @@ -176,7 +176,6 @@ , cmdargs >=0.10 , containers , data-default >=0.5 - , deepseq , directory , doctest >=0.16.3 , extra >=1.6.3 @@ -184,7 +183,7 @@ , file-embed >=0.0.10 , filepath , hashtables >=1.2.3.1 - , megaparsec >=7.0.0 && <8.1 + , megaparsec >=7.0.0 && <9.1 , mtl >=2.2.1 , old-time , parsec >=3 @@ -202,6 +201,7 @@ , timeit , transformers >=0.2 , uglymemo + , unordered-containers >=0.2 , utf8-string >=0.3.5 if (impl(ghc < 8.2)) buildable: False @@ -217,9 +217,9 @@ build-depends: Decimal >=0.5.1 , Glob >=0.9 - , aeson + , aeson >=1 , aeson-pretty - , ansi-terminal >=0.6.2.3 + , ansi-terminal >=0.9 , array , base >=4.9 && <4.15 , base-compat-batteries >=0.10.1 && <0.12 @@ -231,7 +231,6 @@ , cmdargs >=0.10 , containers , data-default >=0.5 - , deepseq , directory , extra >=1.6.3 , fgl >=5.5.4.0 @@ -239,7 +238,7 @@ , filepath , hashtables >=1.2.3.1 , hledger-lib - , megaparsec >=7.0.0 && <8.1 + , megaparsec >=7.0.0 && <9.1 , mtl >=2.2.1 , old-time , parsec >=3 @@ -257,6 +256,7 @@ , timeit , transformers >=0.2 , uglymemo + , 
unordered-containers >=0.2 , utf8-string >=0.3.5 buildable: True default-language: Haskell2010 diff -Nru haskell-hledger-lib-1.18.1/hledger_timeclock.5 haskell-hledger-lib-1.19.1/hledger_timeclock.5 --- haskell-hledger-lib-1.18.1/hledger_timeclock.5 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_timeclock.5 2020-09-07 22:41:25.000000000 +0000 @@ -1,5 +1,5 @@ -.TH "hledger_timeclock" "5" "June 2020" "hledger 1.18.1" "hledger User Manuals" +.TH "hledger_timeclock" "5" "September 2020" "hledger 1.18.99" "hledger User Manuals" diff -Nru haskell-hledger-lib-1.18.1/hledger_timeclock.info haskell-hledger-lib-1.19.1/hledger_timeclock.info --- haskell-hledger-lib-1.18.1/hledger_timeclock.info 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_timeclock.info 2020-09-07 22:41:25.000000000 +0000 @@ -4,8 +4,8 @@  File: hledger_timeclock.info, Node: Top, Up: (dir) -hledger_timeclock(5) hledger 1.18.1 -*********************************** +hledger_timeclock(5) hledger 1.18.99 +************************************ Timeclock - the time logging format of timeclock.el, as read by hledger diff -Nru haskell-hledger-lib-1.18.1/hledger_timeclock.txt haskell-hledger-lib-1.19.1/hledger_timeclock.txt --- haskell-hledger-lib-1.18.1/hledger_timeclock.txt 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_timeclock.txt 2020-09-07 22:41:25.000000000 +0000 @@ -78,4 +78,4 @@ -hledger 1.18.1 June 2020 hledger_timeclock(5) +hledger 1.18.99 September 2020 hledger_timeclock(5) diff -Nru haskell-hledger-lib-1.18.1/hledger_timedot.5 haskell-hledger-lib-1.19.1/hledger_timedot.5 --- haskell-hledger-lib-1.18.1/hledger_timedot.5 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_timedot.5 2020-09-07 22:41:25.000000000 +0000 @@ -1,5 +1,5 @@ -.TH "hledger_timedot" "5" "June 2020" "hledger 1.18.1" "hledger User Manuals" +.TH "hledger_timedot" "5" "September 2020" "hledger 1.18.99" "hledger User Manuals" diff 
-Nru haskell-hledger-lib-1.18.1/hledger_timedot.info haskell-hledger-lib-1.19.1/hledger_timedot.info --- haskell-hledger-lib-1.18.1/hledger_timedot.info 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_timedot.info 2020-09-07 22:41:25.000000000 +0000 @@ -4,8 +4,8 @@  File: hledger_timedot.info, Node: Top, Up: (dir) -hledger_timedot(5) hledger 1.18.1 -********************************* +hledger_timedot(5) hledger 1.18.99 +********************************** Timedot - hledger's human-friendly time logging format diff -Nru haskell-hledger-lib-1.18.1/hledger_timedot.txt haskell-hledger-lib-1.19.1/hledger_timedot.txt --- haskell-hledger-lib-1.18.1/hledger_timedot.txt 2020-06-21 17:38:30.000000000 +0000 +++ haskell-hledger-lib-1.19.1/hledger_timedot.txt 2020-09-07 22:41:25.000000000 +0000 @@ -161,4 +161,4 @@ -hledger 1.18.1 June 2020 hledger_timedot(5) +hledger 1.18.99 September 2020 hledger_timedot(5)