I also needed to install Transmission on my DSM 6.0.2 box just now, and want to share my experience in case someone stumbles upon this in the future.
After reading #2216, I got the impression that it might be a bad idea to use `synouser` to create `transmission` the usual way (it seems that the shell would be reset to `/sbin/nologin` upon reboot).
Since the issue ultimately boils down to a "permission denied" error when trying to execute

```
su - transmission -c '/usr/local/transmission/bin/transmission-daemon -g /usr/local/transmission/var -x /usr/local/transmission/var/transmission.pid'
```

as root, due to the missing entry for `transmission` in `/etc/shadow`, I simply added

```
transmission:*:10933:0:99999:7:::
```

to `/etc/shadow`.
This may not be the best idea, but it works for me. (I haven't rebooted yet, so I can't say for sure whether this setup survives a reboot, but according to my understanding of #2216 it should.)
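If you prefer to script the change rather than hand-edit the file, here is a minimal sketch in Python (the helper name and the duplicate check are my own; back up `/etc/shadow` first and run as root):

```python
SHADOW_ENTRY = "transmission:*:10933:0:99999:7:::"

def ensure_shadow_entry(shadow_path="/etc/shadow", entry=SHADOW_ENTRY):
    """Append the entry to the shadow file unless the user already has one.

    Returns True if the file was modified.
    """
    user = entry.split(":", 1)[0]
    with open(shadow_path, "r") as f:
        lines = f.read().splitlines()
    if any(line.startswith(user + ":") for line in lines):
        return False  # entry already present; don't duplicate it
    with open(shadow_path, "a") as f:
        f.write(entry + "\n")
    return True
```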
You don't need to touch anything else. (OP's other modifications are unnecessary anyway; since the command is quoted, there's no chance of misinterpretation.)
---
**TL;DR**: Add

```
transmission:*:10933:0:99999:7:::
```

to `/etc/shadow`. (Disclaimer: I'm not responsible for any damage to your box.)
#167 accounts for the URL you encountered. (It doesn't help you with fetching results, but it does lead to a clearer error message: "Connection blocked due to unusual activity.")
googler: treat a redirect url containing 'sorry/index?' as blocked
==================================================================
A user (#166) saw such a URL:

```
https://ipv6.google.com/sorry/index?continue=https://www.google.com/search...
```
> Just wanted to know if this is a known problem.
Google does block you, for up to a few hours, if you send too many requests in rapid succession. This happens even in the browser, so it shouldn't be surprising.
What we typically see is `https://ipv4.google.com/sorry/IndexRedirect?continue=...`, which is handled in https://github.com/jarun/googler/blob/e37716e84f4e75e83d38742082644b6a72d22d4d/googler#L736-L737. I guess we can add `sorry/index?` to that rule?
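For reference, the extended check could look like this minimal sketch (`is_blocked_redirect` is a hypothetical name, not googler's actual function):

```python
def is_blocked_redirect(url):
    """Return True if a redirect URL looks like Google's block page.

    Covers both the usual 'sorry/IndexRedirect?' form and the
    'sorry/index?' variant reported in #166.
    """
    return "sorry/IndexRedirect?" in url or "sorry/index?" in url
```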
Anyway, I'm closing this issue since it's most likely just the mundane "connection blocked due to unusual activity". Please report back if the issue turns into something else.
> Is this transitory or could I have hit service limits?
I'm afraid that's a question we would have to ask you. Did you encounter this after some successful sessions? If so, wait patiently for a while and try again.
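If you want to automate the waiting, a sketch of exponential backoff around a fetch (the exception, function names, and delays are all illustrative, not part of googler):

```python
import random
import time

class BlockedError(Exception):
    """Raised when Google serves the 'unusual activity' block page."""

def fetch_with_backoff(fetch, retries=4, base_delay=60.0):
    """Call fetch(), retrying with exponential backoff while blocked."""
    for attempt in range(retries):
        try:
            return fetch()
        except BlockedError:
            if attempt == retries - 1:
                raise
            # Wait base_delay, 2*base_delay, 4*base_delay, ... plus jitter.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```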
[common] log URLs in more functions with network requests
=========================================================
This is a follow-up to #999.
This commit adds the

```
<function_name>: <url>
```

debug message, previously emitted only by `get_content` and `post_content`, to all high-level utility functions that make network requests, except `url_size`, `url_save`, and `url_save_chunked` (in order not to ruin progress bars).
---
Rationale: I've been very happy with the `get_content: <url>` messages since their introduction, so expanding them to all utility functions seems like a good idea to me.
To respond to the original review comment in #999:
> Honestly speaking, I've not needed debug messages like this; leaving tcpdump open and you'll get a bit of everything underhood (at least for HTTP-only sites like Youku and Tudou, which is commonly the case for most Chinese websites).
>
> However, it could be easier to tell in which URL you're stuck purely from the debug message of you-get itself. And it works for HTTPS links.
tcpdump works, but it could be very noisy when you're multitasking on one interface. Moreover, if you're running multiple you-get sessions at once, and maybe even have pages from the same website(s) open in the browser at the same time, it's hard to tell which request originated from which session.
Actually, the existing login API should still work; I just realized the problem was that it didn't percent-encode my password.
Anyway, it doesn't really hurt to upgrade to the current login API used by https://account.nicovideo.jp/login.
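On the percent-encoding pitfall: building the POST body by naive string concatenation breaks as soon as the password contains reserved characters, whereas `urllib.parse.urlencode` encodes each value for you. A minimal illustration (the field names are made up, not Niconico's actual form fields):

```python
from urllib.parse import urlencode

password = "p@ss&word"  # contains reserved characters

# Naive concatenation: the '&' splits the password into two form fields.
naive = "mail=me%40example.com&password=" + password

# urlencode percent-encodes each value correctly.
safe = urlencode({"mail": "me@example.com", "password": password})
```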
[nicovideo] fix extractor
=========================
- Rework login. The existing login system appears to be dysfunctional; after a login attempt with the existing `nicovideo_login` function, the cookiejar looks like this (cookie values have been scrubbed and replaced with xxx):

  ```
  <CookieJar[<Cookie mail_for_retry=xxx for .nicovideo.jp/>, <Cookie nicosid=xxx for .nicovideo.jp/>]>
  ```

  and with these cookies, the `getflv` API simply returns

  ```
  closed=1&done=true
  ```

  Using the refreshed `nicovideo_login` function, the cookiejar after a login attempt looks like:

  ```
  <CookieJar[<Cookie nicosid=xxx for .nicovideo.jp/>, <Cookie user_session=user_session_xxx for .nicovideo.jp/>, <Cookie user_session_secure=xxx for .nicovideo.jp/>]>
  ```

  and the `getflv` API functions correctly with these cookies.
- Make title extraction more robust. Nicovideo.jp seems to serve two different layouts at the moment, one of them being a beta version. See https://gist.github.com/ed95394afc6eff8a781395ac5afcbe48 for a sample page, http://www.nicovideo.jp/watch/sm22221659, in both layouts. (Disclosure: I'm located in the U.S., where I'm served one of the two layouts at random across consecutive runs; YMMV elsewhere.) The title is embedded differently in each layout.
This commit makes sure both layouts are accounted for.
- http://www.nicovideo.jp/api/getflv is now 301 redirected to http://flapi.nicovideo.jp/api/getflv, so we switch to the new location.
- Switch from deprecated `get_html` to `get_content`.
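The title-extraction fallback above boils down to trying one pattern per layout and taking whichever matches first. A sketch with illustrative patterns (the real page markup differs, so treat these regexes as placeholders, not the ones in the commit):

```python
import re

def extract_title(html):
    """Try layout-specific patterns in order; return the first match."""
    patterns = [
        r'<span class="videoHeaderTitle"[^>]*>([^<]+)</span>',  # classic layout (illustrative)
        r'data-title="([^"]+)"',                                # beta layout (illustrative)
    ]
    for pattern in patterns:
        match = re.search(pattern, html)
        if match:
            return match.group(1)
    return None
```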